00:00:00.000 Started by upstream project "autotest-per-patch" build number 132710 00:00:00.000 originally caused by: 00:00:00.001 Started by user sys_sgci 00:00:00.018 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/raid-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-vg.groovy 00:00:00.019 The recommended git tool is: git 00:00:00.019 using credential 00000000-0000-0000-0000-000000000002 00:00:00.021 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/raid-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:00.038 Fetching changes from the remote Git repository 00:00:00.042 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:00.089 Using shallow fetch with depth 1 00:00:00.089 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:00.089 > git --version # timeout=10 00:00:00.125 > git --version # 'git version 2.39.2' 00:00:00.125 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:00.167 Setting http proxy: proxy-dmz.intel.com:911 00:00:00.167 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:00:03.357 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:03.368 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:03.382 Checking out Revision db4637e8b949f278f369ec13f70585206ccd9507 (FETCH_HEAD) 00:00:03.382 > git config core.sparsecheckout # timeout=10 00:00:03.396 > git read-tree -mu HEAD # timeout=10 00:00:03.414 > git checkout -f db4637e8b949f278f369ec13f70585206ccd9507 # timeout=5 00:00:03.438 Commit message: "jenkins/jjb-config: Add missing SPDK_TEST_NVME_INTERRUPT flag" 00:00:03.438 > git rev-list 
--no-walk db4637e8b949f278f369ec13f70585206ccd9507 # timeout=10 00:00:03.555 [Pipeline] Start of Pipeline 00:00:03.570 [Pipeline] library 00:00:03.571 Loading library shm_lib@master 00:00:03.571 Library shm_lib@master is cached. Copying from home. 00:00:03.589 [Pipeline] node 00:00:03.600 Running on VM-host-WFP7 in /var/jenkins/workspace/raid-vg-autotest_2 00:00:03.602 [Pipeline] { 00:00:03.612 [Pipeline] catchError 00:00:03.614 [Pipeline] { 00:00:03.625 [Pipeline] wrap 00:00:03.633 [Pipeline] { 00:00:03.641 [Pipeline] stage 00:00:03.643 [Pipeline] { (Prologue) 00:00:03.660 [Pipeline] echo 00:00:03.661 Node: VM-host-WFP7 00:00:03.667 [Pipeline] cleanWs 00:00:03.676 [WS-CLEANUP] Deleting project workspace... 00:00:03.676 [WS-CLEANUP] Deferred wipeout is used... 00:00:03.682 [WS-CLEANUP] done 00:00:03.912 [Pipeline] setCustomBuildProperty 00:00:03.999 [Pipeline] httpRequest 00:00:04.411 [Pipeline] echo 00:00:04.412 Sorcerer 10.211.164.20 is alive 00:00:04.420 [Pipeline] retry 00:00:04.422 [Pipeline] { 00:00:04.430 [Pipeline] httpRequest 00:00:04.433 HttpMethod: GET 00:00:04.434 URL: http://10.211.164.20/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:04.434 Sending request to url: http://10.211.164.20/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:04.450 Response Code: HTTP/1.1 200 OK 00:00:04.450 Success: Status code 200 is in the accepted range: 200,404 00:00:04.450 Saving response body to /var/jenkins/workspace/raid-vg-autotest_2/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:07.399 [Pipeline] } 00:00:07.412 [Pipeline] // retry 00:00:07.419 [Pipeline] sh 00:00:07.695 + tar --no-same-owner -xf jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:07.707 [Pipeline] httpRequest 00:00:08.904 [Pipeline] echo 00:00:08.906 Sorcerer 10.211.164.20 is alive 00:00:08.916 [Pipeline] retry 00:00:08.918 [Pipeline] { 00:00:08.933 [Pipeline] httpRequest 00:00:08.947 HttpMethod: GET 00:00:08.948 URL: 
http://10.211.164.20/packages/spdk_a4d2a837b70cb5b9861f9e94b2d33fada600e746.tar.gz 00:00:08.949 Sending request to url: http://10.211.164.20/packages/spdk_a4d2a837b70cb5b9861f9e94b2d33fada600e746.tar.gz 00:00:08.967 Response Code: HTTP/1.1 200 OK 00:00:08.968 Success: Status code 200 is in the accepted range: 200,404 00:00:08.968 Saving response body to /var/jenkins/workspace/raid-vg-autotest_2/spdk_a4d2a837b70cb5b9861f9e94b2d33fada600e746.tar.gz 00:02:01.550 [Pipeline] } 00:02:01.571 [Pipeline] // retry 00:02:01.581 [Pipeline] sh 00:02:01.874 + tar --no-same-owner -xf spdk_a4d2a837b70cb5b9861f9e94b2d33fada600e746.tar.gz 00:02:05.179 [Pipeline] sh 00:02:05.463 + git -C spdk log --oneline -n5 00:02:05.463 a4d2a837b lib/reduce: Unmap backing dev blocks - phase 2 00:02:05.463 02b805e62 lib/reduce: Unmap backing dev blocks 00:02:05.463 a5e6ecf28 lib/reduce: Data copy logic in thin read operations 00:02:05.463 a333974e5 nvme/rdma: Flush queued send WRs when disconnecting a qpair 00:02:05.463 2b8672176 nvme/rdma: Prevent submitting new recv WR when disconnecting 00:02:05.482 [Pipeline] writeFile 00:02:05.497 [Pipeline] sh 00:02:05.779 + jbp/jenkins/jjb-config/jobs/scripts/autorun_quirks.sh 00:02:05.792 [Pipeline] sh 00:02:06.077 + cat autorun-spdk.conf 00:02:06.077 SPDK_RUN_FUNCTIONAL_TEST=1 00:02:06.077 SPDK_RUN_ASAN=1 00:02:06.077 SPDK_RUN_UBSAN=1 00:02:06.077 SPDK_TEST_RAID=1 00:02:06.077 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:02:06.085 RUN_NIGHTLY=0 00:02:06.087 [Pipeline] } 00:02:06.102 [Pipeline] // stage 00:02:06.120 [Pipeline] stage 00:02:06.122 [Pipeline] { (Run VM) 00:02:06.137 [Pipeline] sh 00:02:06.425 + jbp/jenkins/jjb-config/jobs/scripts/prepare_nvme.sh 00:02:06.425 + echo 'Start stage prepare_nvme.sh' 00:02:06.425 Start stage prepare_nvme.sh 00:02:06.425 + [[ -n 4 ]] 00:02:06.425 + disk_prefix=ex4 00:02:06.425 + [[ -n /var/jenkins/workspace/raid-vg-autotest_2 ]] 00:02:06.425 + [[ -e /var/jenkins/workspace/raid-vg-autotest_2/autorun-spdk.conf ]] 
00:02:06.425 + source /var/jenkins/workspace/raid-vg-autotest_2/autorun-spdk.conf 00:02:06.425 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:02:06.425 ++ SPDK_RUN_ASAN=1 00:02:06.425 ++ SPDK_RUN_UBSAN=1 00:02:06.425 ++ SPDK_TEST_RAID=1 00:02:06.425 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:02:06.425 ++ RUN_NIGHTLY=0 00:02:06.425 + cd /var/jenkins/workspace/raid-vg-autotest_2 00:02:06.425 + nvme_files=() 00:02:06.425 + declare -A nvme_files 00:02:06.425 + backend_dir=/var/lib/libvirt/images/backends 00:02:06.425 + nvme_files['nvme.img']=5G 00:02:06.425 + nvme_files['nvme-cmb.img']=5G 00:02:06.425 + nvme_files['nvme-multi0.img']=4G 00:02:06.425 + nvme_files['nvme-multi1.img']=4G 00:02:06.425 + nvme_files['nvme-multi2.img']=4G 00:02:06.425 + nvme_files['nvme-openstack.img']=8G 00:02:06.425 + nvme_files['nvme-zns.img']=5G 00:02:06.425 + (( SPDK_TEST_NVME_PMR == 1 )) 00:02:06.425 + (( SPDK_TEST_FTL == 1 )) 00:02:06.425 + (( SPDK_TEST_NVME_FDP == 1 )) 00:02:06.425 + [[ ! -d /var/lib/libvirt/images/backends ]] 00:02:06.425 + for nvme in "${!nvme_files[@]}" 00:02:06.425 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex4-nvme-multi2.img -s 4G 00:02:06.425 Formatting '/var/lib/libvirt/images/backends/ex4-nvme-multi2.img', fmt=raw size=4294967296 preallocation=falloc 00:02:06.425 + for nvme in "${!nvme_files[@]}" 00:02:06.425 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex4-nvme-cmb.img -s 5G 00:02:06.425 Formatting '/var/lib/libvirt/images/backends/ex4-nvme-cmb.img', fmt=raw size=5368709120 preallocation=falloc 00:02:06.425 + for nvme in "${!nvme_files[@]}" 00:02:06.425 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex4-nvme-openstack.img -s 8G 00:02:06.425 Formatting '/var/lib/libvirt/images/backends/ex4-nvme-openstack.img', fmt=raw size=8589934592 preallocation=falloc 00:02:06.425 + for nvme in "${!nvme_files[@]}" 00:02:06.425 + sudo -E 
spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex4-nvme-zns.img -s 5G 00:02:06.426 Formatting '/var/lib/libvirt/images/backends/ex4-nvme-zns.img', fmt=raw size=5368709120 preallocation=falloc 00:02:06.426 + for nvme in "${!nvme_files[@]}" 00:02:06.426 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex4-nvme-multi1.img -s 4G 00:02:06.426 Formatting '/var/lib/libvirt/images/backends/ex4-nvme-multi1.img', fmt=raw size=4294967296 preallocation=falloc 00:02:06.426 + for nvme in "${!nvme_files[@]}" 00:02:06.426 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex4-nvme-multi0.img -s 4G 00:02:06.426 Formatting '/var/lib/libvirt/images/backends/ex4-nvme-multi0.img', fmt=raw size=4294967296 preallocation=falloc 00:02:06.426 + for nvme in "${!nvme_files[@]}" 00:02:06.426 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex4-nvme.img -s 5G 00:02:06.685 Formatting '/var/lib/libvirt/images/backends/ex4-nvme.img', fmt=raw size=5368709120 preallocation=falloc 00:02:06.685 ++ sudo grep -rl ex4-nvme.img /etc/libvirt/qemu 00:02:06.685 + echo 'End stage prepare_nvme.sh' 00:02:06.685 End stage prepare_nvme.sh 00:02:06.699 [Pipeline] sh 00:02:06.987 + DISTRO=fedora39 CPUS=10 RAM=12288 jbp/jenkins/jjb-config/jobs/scripts/vagrant_create_vm.sh 00:02:06.988 Setup: -n 10 -s 12288 -x http://proxy-dmz.intel.com:911 -p libvirt --qemu-emulator=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 -b /var/lib/libvirt/images/backends/ex4-nvme.img -b /var/lib/libvirt/images/backends/ex4-nvme-multi0.img,nvme,/var/lib/libvirt/images/backends/ex4-nvme-multi1.img:/var/lib/libvirt/images/backends/ex4-nvme-multi2.img -H -a -v -f fedora39 00:02:06.988 00:02:06.988 DIR=/var/jenkins/workspace/raid-vg-autotest_2/spdk/scripts/vagrant 00:02:06.988 SPDK_DIR=/var/jenkins/workspace/raid-vg-autotest_2/spdk 00:02:06.988 VAGRANT_TARGET=/var/jenkins/workspace/raid-vg-autotest_2 
00:02:06.988 HELP=0 00:02:06.988 DRY_RUN=0 00:02:06.988 NVME_FILE=/var/lib/libvirt/images/backends/ex4-nvme.img,/var/lib/libvirt/images/backends/ex4-nvme-multi0.img, 00:02:06.988 NVME_DISKS_TYPE=nvme,nvme, 00:02:06.988 NVME_AUTO_CREATE=0 00:02:06.988 NVME_DISKS_NAMESPACES=,/var/lib/libvirt/images/backends/ex4-nvme-multi1.img:/var/lib/libvirt/images/backends/ex4-nvme-multi2.img, 00:02:06.988 NVME_CMB=,, 00:02:06.988 NVME_PMR=,, 00:02:06.988 NVME_ZNS=,, 00:02:06.988 NVME_MS=,, 00:02:06.988 NVME_FDP=,, 00:02:06.988 SPDK_VAGRANT_DISTRO=fedora39 00:02:06.988 SPDK_VAGRANT_VMCPU=10 00:02:06.988 SPDK_VAGRANT_VMRAM=12288 00:02:06.988 SPDK_VAGRANT_PROVIDER=libvirt 00:02:06.988 SPDK_VAGRANT_HTTP_PROXY=http://proxy-dmz.intel.com:911 00:02:06.988 SPDK_QEMU_EMULATOR=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 00:02:06.988 SPDK_OPENSTACK_NETWORK=0 00:02:06.988 VAGRANT_PACKAGE_BOX=0 00:02:06.988 VAGRANTFILE=/var/jenkins/workspace/raid-vg-autotest_2/spdk/scripts/vagrant/Vagrantfile 00:02:06.988 FORCE_DISTRO=true 00:02:06.988 VAGRANT_BOX_VERSION= 00:02:06.988 EXTRA_VAGRANTFILES= 00:02:06.988 NIC_MODEL=virtio 00:02:06.988 00:02:06.988 mkdir: created directory '/var/jenkins/workspace/raid-vg-autotest_2/fedora39-libvirt' 00:02:06.988 /var/jenkins/workspace/raid-vg-autotest_2/fedora39-libvirt /var/jenkins/workspace/raid-vg-autotest_2 00:02:09.526 Bringing machine 'default' up with 'libvirt' provider... 00:02:10.096 ==> default: Creating image (snapshot of base box volume). 00:02:10.096 ==> default: Creating domain with the following settings... 
00:02:10.096 ==> default: -- Name: fedora39-39-1.5-1721788873-2326_default_1733457183_44b09b6503bd08108c2e 00:02:10.096 ==> default: -- Domain type: kvm 00:02:10.096 ==> default: -- Cpus: 10 00:02:10.096 ==> default: -- Feature: acpi 00:02:10.096 ==> default: -- Feature: apic 00:02:10.096 ==> default: -- Feature: pae 00:02:10.096 ==> default: -- Memory: 12288M 00:02:10.096 ==> default: -- Memory Backing: hugepages: 00:02:10.096 ==> default: -- Management MAC: 00:02:10.096 ==> default: -- Loader: 00:02:10.096 ==> default: -- Nvram: 00:02:10.096 ==> default: -- Base box: spdk/fedora39 00:02:10.096 ==> default: -- Storage pool: default 00:02:10.096 ==> default: -- Image: /var/lib/libvirt/images/fedora39-39-1.5-1721788873-2326_default_1733457183_44b09b6503bd08108c2e.img (20G) 00:02:10.096 ==> default: -- Volume Cache: default 00:02:10.096 ==> default: -- Kernel: 00:02:10.096 ==> default: -- Initrd: 00:02:10.096 ==> default: -- Graphics Type: vnc 00:02:10.096 ==> default: -- Graphics Port: -1 00:02:10.096 ==> default: -- Graphics IP: 127.0.0.1 00:02:10.096 ==> default: -- Graphics Password: Not defined 00:02:10.096 ==> default: -- Video Type: cirrus 00:02:10.096 ==> default: -- Video VRAM: 9216 00:02:10.096 ==> default: -- Sound Type: 00:02:10.096 ==> default: -- Keymap: en-us 00:02:10.096 ==> default: -- TPM Path: 00:02:10.096 ==> default: -- INPUT: type=mouse, bus=ps2 00:02:10.096 ==> default: -- Command line args: 00:02:10.096 ==> default: -> value=-device, 00:02:10.096 ==> default: -> value=nvme,id=nvme-0,serial=12340,addr=0x10, 00:02:10.096 ==> default: -> value=-drive, 00:02:10.096 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex4-nvme.img,if=none,id=nvme-0-drive0, 00:02:10.096 ==> default: -> value=-device, 00:02:10.096 ==> default: -> value=nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:02:10.096 ==> default: -> value=-device, 00:02:10.096 ==> default: -> 
value=nvme,id=nvme-1,serial=12341,addr=0x11, 00:02:10.096 ==> default: -> value=-drive, 00:02:10.096 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex4-nvme-multi0.img,if=none,id=nvme-1-drive0, 00:02:10.096 ==> default: -> value=-device, 00:02:10.096 ==> default: -> value=nvme-ns,drive=nvme-1-drive0,bus=nvme-1,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:02:10.096 ==> default: -> value=-drive, 00:02:10.096 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex4-nvme-multi1.img,if=none,id=nvme-1-drive1, 00:02:10.096 ==> default: -> value=-device, 00:02:10.096 ==> default: -> value=nvme-ns,drive=nvme-1-drive1,bus=nvme-1,nsid=2,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:02:10.096 ==> default: -> value=-drive, 00:02:10.096 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex4-nvme-multi2.img,if=none,id=nvme-1-drive2, 00:02:10.096 ==> default: -> value=-device, 00:02:10.096 ==> default: -> value=nvme-ns,drive=nvme-1-drive2,bus=nvme-1,nsid=3,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:02:10.355 ==> default: Creating shared folders metadata... 00:02:10.355 ==> default: Starting domain. 00:02:11.734 ==> default: Waiting for domain to get an IP address... 00:02:29.869 ==> default: Waiting for SSH to become available... 00:02:29.869 ==> default: Configuring and enabling network interfaces... 00:02:35.142 default: SSH address: 192.168.121.87:22 00:02:35.142 default: SSH username: vagrant 00:02:35.142 default: SSH auth method: private key 00:02:37.045 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/raid-vg-autotest_2/spdk/ => /home/vagrant/spdk_repo/spdk 00:02:47.050 ==> default: Mounting SSHFS shared folder... 00:02:47.990 ==> default: Mounting folder via SSHFS: /mnt/jenkins_nvme/jenkins/workspace/raid-vg-autotest_2/fedora39-libvirt/output => /home/vagrant/spdk_repo/output 00:02:47.990 ==> default: Checking Mount.. 
00:02:49.398 ==> default: Folder Successfully Mounted! 00:02:49.398 ==> default: Running provisioner: file... 00:02:50.779 default: ~/.gitconfig => .gitconfig 00:02:51.040 00:02:51.040 SUCCESS! 00:02:51.040 00:02:51.040 cd to /var/jenkins/workspace/raid-vg-autotest_2/fedora39-libvirt and type "vagrant ssh" to use. 00:02:51.040 Use vagrant "suspend" and vagrant "resume" to stop and start. 00:02:51.040 Use vagrant "destroy" followed by "rm -rf /var/jenkins/workspace/raid-vg-autotest_2/fedora39-libvirt" to destroy all trace of vm. 00:02:51.040 00:02:51.050 [Pipeline] } 00:02:51.065 [Pipeline] // stage 00:02:51.075 [Pipeline] dir 00:02:51.076 Running in /var/jenkins/workspace/raid-vg-autotest_2/fedora39-libvirt 00:02:51.077 [Pipeline] { 00:02:51.091 [Pipeline] catchError 00:02:51.093 [Pipeline] { 00:02:51.106 [Pipeline] sh 00:02:51.390 + + sed -ne /^Host/,$p 00:02:51.390 vagrant ssh-config --host vagrant 00:02:51.390 + tee ssh_conf 00:02:54.691 Host vagrant 00:02:54.691 HostName 192.168.121.87 00:02:54.691 User vagrant 00:02:54.691 Port 22 00:02:54.691 UserKnownHostsFile /dev/null 00:02:54.691 StrictHostKeyChecking no 00:02:54.691 PasswordAuthentication no 00:02:54.691 IdentityFile /var/lib/libvirt/images/.vagrant.d/boxes/spdk-VAGRANTSLASH-fedora39/39-1.5-1721788873-2326/libvirt/fedora39 00:02:54.691 IdentitiesOnly yes 00:02:54.691 LogLevel FATAL 00:02:54.691 ForwardAgent yes 00:02:54.691 ForwardX11 yes 00:02:54.691 00:02:54.705 [Pipeline] withEnv 00:02:54.708 [Pipeline] { 00:02:54.723 [Pipeline] sh 00:02:55.004 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant #!/bin/bash 00:02:55.004 source /etc/os-release 00:02:55.004 [[ -e /image.version ]] && img=$(< /image.version) 00:02:55.004 # Minimal, systemd-like check. 
00:02:55.004 if [[ -e /.dockerenv ]]; then 00:02:55.004 # Clear garbage from the node's name: 00:02:55.004 # agt-er_autotest_547-896 -> autotest_547-896 00:02:55.004 # $HOSTNAME is the actual container id 00:02:55.004 agent=$HOSTNAME@${DOCKER_SWARM_PLUGIN_JENKINS_AGENT_NAME#*_} 00:02:55.004 if grep -q "/etc/hostname" /proc/self/mountinfo; then 00:02:55.004 # We can assume this is a mount from a host where container is running, 00:02:55.004 # so fetch its hostname to easily identify the target swarm worker. 00:02:55.004 container="$(< /etc/hostname) ($agent)" 00:02:55.004 else 00:02:55.004 # Fallback 00:02:55.004 container=$agent 00:02:55.004 fi 00:02:55.004 fi 00:02:55.004 echo "${NAME} ${VERSION_ID}|$(uname -r)|${img:-N/A}|${container:-N/A}" 00:02:55.004 00:02:55.275 [Pipeline] } 00:02:55.292 [Pipeline] // withEnv 00:02:55.300 [Pipeline] setCustomBuildProperty 00:02:55.315 [Pipeline] stage 00:02:55.317 [Pipeline] { (Tests) 00:02:55.335 [Pipeline] sh 00:02:55.620 + scp -F ssh_conf -r /var/jenkins/workspace/raid-vg-autotest_2/jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh vagrant@vagrant:./ 00:02:55.892 [Pipeline] sh 00:02:56.175 + scp -F ssh_conf -r /var/jenkins/workspace/raid-vg-autotest_2/jbp/jenkins/jjb-config/jobs/scripts/pkgdep-autoruner.sh vagrant@vagrant:./ 00:02:56.452 [Pipeline] timeout 00:02:56.452 Timeout set to expire in 1 hr 30 min 00:02:56.454 [Pipeline] { 00:02:56.469 [Pipeline] sh 00:02:56.753 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant git -C spdk_repo/spdk reset --hard 00:02:57.322 HEAD is now at a4d2a837b lib/reduce: Unmap backing dev blocks - phase 2 00:02:57.334 [Pipeline] sh 00:02:57.615 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant sudo chown vagrant:vagrant spdk_repo 00:02:57.888 [Pipeline] sh 00:02:58.173 + scp -F ssh_conf -r /var/jenkins/workspace/raid-vg-autotest_2/autorun-spdk.conf vagrant@vagrant:spdk_repo 00:02:58.449 [Pipeline] sh 00:02:58.779 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant 
JOB_BASE_NAME=raid-vg-autotest ./autoruner.sh spdk_repo 00:02:59.039 ++ readlink -f spdk_repo 00:02:59.039 + DIR_ROOT=/home/vagrant/spdk_repo 00:02:59.039 + [[ -n /home/vagrant/spdk_repo ]] 00:02:59.039 + DIR_SPDK=/home/vagrant/spdk_repo/spdk 00:02:59.039 + DIR_OUTPUT=/home/vagrant/spdk_repo/output 00:02:59.039 + [[ -d /home/vagrant/spdk_repo/spdk ]] 00:02:59.039 + [[ ! -d /home/vagrant/spdk_repo/output ]] 00:02:59.039 + [[ -d /home/vagrant/spdk_repo/output ]] 00:02:59.039 + [[ raid-vg-autotest == pkgdep-* ]] 00:02:59.039 + cd /home/vagrant/spdk_repo 00:02:59.039 + source /etc/os-release 00:02:59.039 ++ NAME='Fedora Linux' 00:02:59.039 ++ VERSION='39 (Cloud Edition)' 00:02:59.039 ++ ID=fedora 00:02:59.039 ++ VERSION_ID=39 00:02:59.039 ++ VERSION_CODENAME= 00:02:59.039 ++ PLATFORM_ID=platform:f39 00:02:59.039 ++ PRETTY_NAME='Fedora Linux 39 (Cloud Edition)' 00:02:59.039 ++ ANSI_COLOR='0;38;2;60;110;180' 00:02:59.039 ++ LOGO=fedora-logo-icon 00:02:59.039 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:39 00:02:59.039 ++ HOME_URL=https://fedoraproject.org/ 00:02:59.039 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f39/system-administrators-guide/ 00:02:59.039 ++ SUPPORT_URL=https://ask.fedoraproject.org/ 00:02:59.039 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/ 00:02:59.039 ++ REDHAT_BUGZILLA_PRODUCT=Fedora 00:02:59.039 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=39 00:02:59.039 ++ REDHAT_SUPPORT_PRODUCT=Fedora 00:02:59.039 ++ REDHAT_SUPPORT_PRODUCT_VERSION=39 00:02:59.039 ++ SUPPORT_END=2024-11-12 00:02:59.039 ++ VARIANT='Cloud Edition' 00:02:59.039 ++ VARIANT_ID=cloud 00:02:59.039 + uname -a 00:02:59.039 Linux fedora39-cloud-1721788873-2326 6.8.9-200.fc39.x86_64 #1 SMP PREEMPT_DYNAMIC Wed Jul 24 03:04:40 UTC 2024 x86_64 GNU/Linux 00:02:59.039 + sudo /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:02:59.608 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:02:59.608 Hugepages 00:02:59.608 
node hugesize free / total 00:02:59.608 node0 1048576kB 0 / 0 00:02:59.608 node0 2048kB 0 / 0 00:02:59.608 00:02:59.608 Type BDF Vendor Device NUMA Driver Device Block devices 00:02:59.608 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:02:59.608 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme1 nvme1n1 00:02:59.608 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme0 nvme0n1 nvme0n2 nvme0n3 00:02:59.608 + rm -f /tmp/spdk-ld-path 00:02:59.608 + source autorun-spdk.conf 00:02:59.608 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:02:59.608 ++ SPDK_RUN_ASAN=1 00:02:59.608 ++ SPDK_RUN_UBSAN=1 00:02:59.608 ++ SPDK_TEST_RAID=1 00:02:59.608 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:02:59.608 ++ RUN_NIGHTLY=0 00:02:59.608 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:02:59.608 + [[ -n '' ]] 00:02:59.608 + sudo git config --global --add safe.directory /home/vagrant/spdk_repo/spdk 00:02:59.867 + for M in /var/spdk/build-*-manifest.txt 00:02:59.867 + [[ -f /var/spdk/build-kernel-manifest.txt ]] 00:02:59.867 + cp /var/spdk/build-kernel-manifest.txt /home/vagrant/spdk_repo/output/ 00:02:59.867 + for M in /var/spdk/build-*-manifest.txt 00:02:59.867 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:02:59.867 + cp /var/spdk/build-pkg-manifest.txt /home/vagrant/spdk_repo/output/ 00:02:59.867 + for M in /var/spdk/build-*-manifest.txt 00:02:59.867 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:02:59.867 + cp /var/spdk/build-repo-manifest.txt /home/vagrant/spdk_repo/output/ 00:02:59.867 ++ uname 00:02:59.867 + [[ Linux == \L\i\n\u\x ]] 00:02:59.867 + sudo dmesg -T 00:02:59.867 + sudo dmesg --clear 00:02:59.867 + dmesg_pid=5430 00:02:59.867 + [[ Fedora Linux == FreeBSD ]] 00:02:59.867 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:02:59.867 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:02:59.867 + sudo dmesg -Tw 00:02:59.867 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:02:59.867 + [[ -x /usr/src/fio-static/fio ]] 00:02:59.867 + export FIO_BIN=/usr/src/fio-static/fio 
00:02:59.867 + FIO_BIN=/usr/src/fio-static/fio 00:02:59.867 + [[ '' == \/\q\e\m\u\_\v\f\i\o\/* ]] 00:02:59.867 + [[ ! -v VFIO_QEMU_BIN ]] 00:02:59.867 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:02:59.867 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:02:59.867 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:02:59.867 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:02:59.867 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:02:59.867 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:02:59.867 + spdk/autorun.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:02:59.867 03:53:53 -- common/autotest_common.sh@1710 -- $ [[ n == y ]] 00:02:59.867 03:53:53 -- spdk/autorun.sh@20 -- $ source /home/vagrant/spdk_repo/autorun-spdk.conf 00:02:59.867 03:53:53 -- spdk_repo/autorun-spdk.conf@1 -- $ SPDK_RUN_FUNCTIONAL_TEST=1 00:02:59.867 03:53:53 -- spdk_repo/autorun-spdk.conf@2 -- $ SPDK_RUN_ASAN=1 00:02:59.867 03:53:53 -- spdk_repo/autorun-spdk.conf@3 -- $ SPDK_RUN_UBSAN=1 00:02:59.867 03:53:53 -- spdk_repo/autorun-spdk.conf@4 -- $ SPDK_TEST_RAID=1 00:02:59.867 03:53:53 -- spdk_repo/autorun-spdk.conf@5 -- $ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:02:59.867 03:53:53 -- spdk_repo/autorun-spdk.conf@6 -- $ RUN_NIGHTLY=0 00:02:59.867 03:53:53 -- spdk/autorun.sh@22 -- $ trap 'timing_finish || exit 1' EXIT 00:02:59.867 03:53:53 -- spdk/autorun.sh@25 -- $ /home/vagrant/spdk_repo/spdk/autobuild.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:03:00.127 03:53:53 -- common/autotest_common.sh@1710 -- $ [[ n == y ]] 00:03:00.127 03:53:53 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:03:00.127 03:53:53 -- scripts/common.sh@15 -- $ shopt -s extglob 00:03:00.127 03:53:53 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]] 00:03:00.127 03:53:53 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:03:00.127 
03:53:53 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:03:00.127 03:53:53 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:00.127 03:53:53 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:00.127 03:53:53 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:00.127 03:53:53 -- paths/export.sh@5 -- $ export PATH 00:03:00.127 03:53:53 -- paths/export.sh@6 -- $ echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:00.127 03:53:53 -- common/autobuild_common.sh@492 -- $ out=/home/vagrant/spdk_repo/spdk/../output 00:03:00.127 03:53:53 -- common/autobuild_common.sh@493 -- $ date +%s 00:03:00.127 03:53:53 -- common/autobuild_common.sh@493 -- $ mktemp -dt spdk_1733457233.XXXXXX 00:03:00.127 03:53:53 -- common/autobuild_common.sh@493 -- $ SPDK_WORKSPACE=/tmp/spdk_1733457233.ntypKF 00:03:00.127 03:53:53 -- common/autobuild_common.sh@495 -- $ [[ -n '' ]] 00:03:00.127 03:53:53 -- common/autobuild_common.sh@499 -- $ '[' -n '' ']' 00:03:00.127 03:53:53 -- common/autobuild_common.sh@502 -- $ scanbuild_exclude='--exclude /home/vagrant/spdk_repo/spdk/dpdk/' 00:03:00.127 03:53:53 -- common/autobuild_common.sh@506 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp' 00:03:00.127 03:53:53 -- common/autobuild_common.sh@508 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/spdk/dpdk/ --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs' 00:03:00.127 03:53:53 -- common/autobuild_common.sh@509 -- $ get_config_params 00:03:00.127 03:53:53 -- common/autotest_common.sh@409 -- $ xtrace_disable 00:03:00.127 03:53:53 -- common/autotest_common.sh@10 -- $ set +x 00:03:00.127 03:53:53 -- common/autobuild_common.sh@509 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-raid5f' 
00:03:00.127 03:53:53 -- common/autobuild_common.sh@511 -- $ start_monitor_resources 00:03:00.127 03:53:53 -- pm/common@17 -- $ local monitor 00:03:00.127 03:53:53 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:00.127 03:53:53 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:00.127 03:53:53 -- pm/common@25 -- $ sleep 1 00:03:00.127 03:53:53 -- pm/common@21 -- $ date +%s 00:03:00.127 03:53:53 -- pm/common@21 -- $ date +%s 00:03:00.127 03:53:53 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1733457233 00:03:00.127 03:53:53 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1733457233 00:03:00.127 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1733457233_collect-vmstat.pm.log 00:03:00.127 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1733457233_collect-cpu-load.pm.log 00:03:01.067 03:53:54 -- common/autobuild_common.sh@512 -- $ trap stop_monitor_resources EXIT 00:03:01.067 03:53:54 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:03:01.067 03:53:54 -- spdk/autobuild.sh@12 -- $ umask 022 00:03:01.067 03:53:54 -- spdk/autobuild.sh@13 -- $ cd /home/vagrant/spdk_repo/spdk 00:03:01.067 03:53:54 -- spdk/autobuild.sh@16 -- $ date -u 00:03:01.067 Fri Dec 6 03:53:54 AM UTC 2024 00:03:01.067 03:53:54 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:03:01.067 v25.01-pre-305-ga4d2a837b 00:03:01.067 03:53:54 -- spdk/autobuild.sh@19 -- $ '[' 1 -eq 1 ']' 00:03:01.067 03:53:54 -- spdk/autobuild.sh@20 -- $ run_test asan echo 'using asan' 00:03:01.067 03:53:54 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']' 00:03:01.067 03:53:54 -- common/autotest_common.sh@1111 -- $ xtrace_disable 00:03:01.067 03:53:54 -- common/autotest_common.sh@10 -- $ set +x 
00:03:01.067 ************************************
00:03:01.067 START TEST asan
00:03:01.067 ************************************
00:03:01.067 using asan
00:03:01.067 03:53:54 asan -- common/autotest_common.sh@1129 -- $ echo 'using asan'
00:03:01.067
00:03:01.067 real	0m0.000s
00:03:01.067 user	0m0.000s
00:03:01.067 sys	0m0.000s
00:03:01.067 03:53:54 asan -- common/autotest_common.sh@1130 -- $ xtrace_disable
00:03:01.067 03:53:54 asan -- common/autotest_common.sh@10 -- $ set +x
00:03:01.067 ************************************
00:03:01.067 END TEST asan
00:03:01.067 ************************************
00:03:01.326 03:53:54 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']'
00:03:01.326 03:53:54 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan'
00:03:01.327 03:53:54 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']'
00:03:01.327 03:53:54 -- common/autotest_common.sh@1111 -- $ xtrace_disable
00:03:01.327 03:53:54 -- common/autotest_common.sh@10 -- $ set +x
00:03:01.327 ************************************
00:03:01.327 START TEST ubsan
00:03:01.327 ************************************
00:03:01.327 using ubsan
00:03:01.327 03:53:54 ubsan -- common/autotest_common.sh@1129 -- $ echo 'using ubsan'
00:03:01.327
00:03:01.327 real	0m0.000s
00:03:01.327 user	0m0.000s
00:03:01.327 sys	0m0.000s
00:03:01.327 03:53:54 ubsan -- common/autotest_common.sh@1130 -- $ xtrace_disable
00:03:01.327 03:53:54 ubsan -- common/autotest_common.sh@10 -- $ set +x
00:03:01.327 ************************************
00:03:01.327 END TEST ubsan
00:03:01.327 ************************************
00:03:01.327 03:53:54 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']'
00:03:01.327 03:53:54 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in
00:03:01.327 03:53:54 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]]
00:03:01.327 03:53:54 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]]
00:03:01.327 03:53:54 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]]
00:03:01.327 03:53:54 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]]
00:03:01.327 03:53:54 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]]
00:03:01.327 03:53:54 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]]
00:03:01.327 03:53:54 -- spdk/autobuild.sh@67 -- $ /home/vagrant/spdk_repo/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-raid5f --with-shared
00:03:01.327 Using default SPDK env in /home/vagrant/spdk_repo/spdk/lib/env_dpdk
00:03:01.327 Using default DPDK in /home/vagrant/spdk_repo/spdk/dpdk/build
00:03:01.895 Using 'verbs' RDMA provider
00:03:18.239 Configuring ISA-L (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal.log)...done.
00:03:33.125 Configuring ISA-L-crypto (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal-crypto.log)...done.
00:03:33.951 Creating mk/config.mk...done.
00:03:33.951 Creating mk/cc.flags.mk...done.
00:03:33.951 Type 'make' to build.
00:03:33.951 03:54:27 -- spdk/autobuild.sh@70 -- $ run_test make make -j10
00:03:33.951 03:54:27 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']'
00:03:33.951 03:54:27 -- common/autotest_common.sh@1111 -- $ xtrace_disable
00:03:33.951 03:54:27 -- common/autotest_common.sh@10 -- $ set +x
00:03:33.951 ************************************
00:03:33.951 START TEST make
00:03:33.951 ************************************
00:03:33.951 03:54:27 make -- common/autotest_common.sh@1129 -- $ make -j10
00:03:34.517 make[1]: Nothing to be done for 'all'.
00:03:46.723 The Meson build system
00:03:46.723 Version: 1.5.0
00:03:46.723 Source dir: /home/vagrant/spdk_repo/spdk/dpdk
00:03:46.723 Build dir: /home/vagrant/spdk_repo/spdk/dpdk/build-tmp
00:03:46.723 Build type: native build
00:03:46.723 Program cat found: YES (/usr/bin/cat)
00:03:46.723 Project name: DPDK
00:03:46.723 Project version: 24.03.0
00:03:46.723 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)")
00:03:46.723 C linker for the host machine: cc ld.bfd 2.40-14
00:03:46.723 Host machine cpu family: x86_64
00:03:46.723 Host machine cpu: x86_64
00:03:46.723 Message: ## Building in Developer Mode ##
00:03:46.723 Program pkg-config found: YES (/usr/bin/pkg-config)
00:03:46.723 Program check-symbols.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/check-symbols.sh)
00:03:46.723 Program options-ibverbs-static.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/options-ibverbs-static.sh)
00:03:46.723 Program python3 found: YES (/usr/bin/python3)
00:03:46.723 Program cat found: YES (/usr/bin/cat)
00:03:46.723 Compiler for C supports arguments -march=native: YES
00:03:46.723 Checking for size of "void *" : 8
00:03:46.723 Checking for size of "void *" : 8 (cached)
00:03:46.723 Compiler for C supports link arguments -Wl,--undefined-version: YES
00:03:46.723 Library m found: YES
00:03:46.723 Library numa found: YES
00:03:46.723 Has header "numaif.h" : YES
00:03:46.723 Library fdt found: NO
00:03:46.723 Library execinfo found: NO
00:03:46.723 Has header "execinfo.h" : YES
00:03:46.723 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5
00:03:46.723 Run-time dependency libarchive found: NO (tried pkgconfig)
00:03:46.723 Run-time dependency libbsd found: NO (tried pkgconfig)
00:03:46.723 Run-time dependency jansson found: NO (tried pkgconfig)
00:03:46.723 Run-time dependency openssl found: YES 3.1.1
00:03:46.723 Run-time dependency libpcap found: YES 1.10.4
00:03:46.723 Has header "pcap.h" with dependency libpcap: YES
00:03:46.723 Compiler for C supports arguments -Wcast-qual: YES
00:03:46.723 Compiler for C supports arguments -Wdeprecated: YES
00:03:46.723 Compiler for C supports arguments -Wformat: YES
00:03:46.723 Compiler for C supports arguments -Wformat-nonliteral: NO
00:03:46.723 Compiler for C supports arguments -Wformat-security: NO
00:03:46.723 Compiler for C supports arguments -Wmissing-declarations: YES
00:03:46.723 Compiler for C supports arguments -Wmissing-prototypes: YES
00:03:46.723 Compiler for C supports arguments -Wnested-externs: YES
00:03:46.723 Compiler for C supports arguments -Wold-style-definition: YES
00:03:46.723 Compiler for C supports arguments -Wpointer-arith: YES
00:03:46.723 Compiler for C supports arguments -Wsign-compare: YES
00:03:46.723 Compiler for C supports arguments -Wstrict-prototypes: YES
00:03:46.723 Compiler for C supports arguments -Wundef: YES
00:03:46.723 Compiler for C supports arguments -Wwrite-strings: YES
00:03:46.723 Compiler for C supports arguments -Wno-address-of-packed-member: YES
00:03:46.723 Compiler for C supports arguments -Wno-packed-not-aligned: YES
00:03:46.723 Compiler for C supports arguments -Wno-missing-field-initializers: YES
00:03:46.723 Compiler for C supports arguments -Wno-zero-length-bounds: YES
00:03:46.723 Program objdump found: YES (/usr/bin/objdump)
00:03:46.723 Compiler for C supports arguments -mavx512f: YES
00:03:46.723 Checking if "AVX512 checking" compiles: YES
00:03:46.723 Fetching value of define "__SSE4_2__" : 1
00:03:46.723 Fetching value of define "__AES__" : 1
00:03:46.723 Fetching value of define "__AVX__" : 1
00:03:46.723 Fetching value of define "__AVX2__" : 1
00:03:46.723 Fetching value of define "__AVX512BW__" : 1
00:03:46.723 Fetching value of define "__AVX512CD__" : 1
00:03:46.723 Fetching value of define "__AVX512DQ__" : 1
00:03:46.723 Fetching value of define "__AVX512F__" : 1
00:03:46.723 Fetching value of define "__AVX512VL__" : 1
00:03:46.723 Fetching value of define "__PCLMUL__" : 1
00:03:46.723 Fetching value of define "__RDRND__" : 1
00:03:46.723 Fetching value of define "__RDSEED__" : 1
00:03:46.723 Fetching value of define "__VPCLMULQDQ__" : (undefined)
00:03:46.723 Fetching value of define "__znver1__" : (undefined)
00:03:46.723 Fetching value of define "__znver2__" : (undefined)
00:03:46.723 Fetching value of define "__znver3__" : (undefined)
00:03:46.723 Fetching value of define "__znver4__" : (undefined)
00:03:46.723 Library asan found: YES
00:03:46.723 Compiler for C supports arguments -Wno-format-truncation: YES
00:03:46.723 Message: lib/log: Defining dependency "log"
00:03:46.723 Message: lib/kvargs: Defining dependency "kvargs"
00:03:46.723 Message: lib/telemetry: Defining dependency "telemetry"
00:03:46.723 Library rt found: YES
00:03:46.723 Checking for function "getentropy" : NO
00:03:46.723 Message: lib/eal: Defining dependency "eal"
00:03:46.723 Message: lib/ring: Defining dependency "ring"
00:03:46.723 Message: lib/rcu: Defining dependency "rcu"
00:03:46.723 Message: lib/mempool: Defining dependency "mempool"
00:03:46.723 Message: lib/mbuf: Defining dependency "mbuf"
00:03:46.723 Fetching value of define "__PCLMUL__" : 1 (cached)
00:03:46.723 Fetching value of define "__AVX512F__" : 1 (cached)
00:03:46.723 Fetching value of define "__AVX512BW__" : 1 (cached)
00:03:46.723 Fetching value of define "__AVX512DQ__" : 1 (cached)
00:03:46.723 Fetching value of define "__AVX512VL__" : 1 (cached)
00:03:46.723 Fetching value of define "__VPCLMULQDQ__" : (undefined) (cached)
00:03:46.723 Compiler for C supports arguments -mpclmul: YES
00:03:46.723 Compiler for C supports arguments -maes: YES
00:03:46.723 Compiler for C supports arguments -mavx512f: YES (cached)
00:03:46.723 Compiler for C supports arguments -mavx512bw: YES
00:03:46.723 Compiler for C supports arguments -mavx512dq: YES
00:03:46.723 Compiler for C supports arguments -mavx512vl: YES
00:03:46.723 Compiler for C supports arguments -mvpclmulqdq: YES
00:03:46.723 Compiler for C supports arguments -mavx2: YES
00:03:46.723 Compiler for C supports arguments -mavx: YES
00:03:46.723 Message: lib/net: Defining dependency "net"
00:03:46.723 Message: lib/meter: Defining dependency "meter"
00:03:46.723 Message: lib/ethdev: Defining dependency "ethdev"
00:03:46.723 Message: lib/pci: Defining dependency "pci"
00:03:46.723 Message: lib/cmdline: Defining dependency "cmdline"
00:03:46.723 Message: lib/hash: Defining dependency "hash"
00:03:46.723 Message: lib/timer: Defining dependency "timer"
00:03:46.723 Message: lib/compressdev: Defining dependency "compressdev"
00:03:46.723 Message: lib/cryptodev: Defining dependency "cryptodev"
00:03:46.723 Message: lib/dmadev: Defining dependency "dmadev"
00:03:46.723 Compiler for C supports arguments -Wno-cast-qual: YES
00:03:46.723 Message: lib/power: Defining dependency "power"
00:03:46.723 Message: lib/reorder: Defining dependency "reorder"
00:03:46.723 Message: lib/security: Defining dependency "security"
00:03:46.723 Has header "linux/userfaultfd.h" : YES
00:03:46.723 Has header "linux/vduse.h" : YES
00:03:46.723 Message: lib/vhost: Defining dependency "vhost"
00:03:46.723 Compiler for C supports arguments -Wno-format-truncation: YES (cached)
00:03:46.723 Message: drivers/bus/pci: Defining dependency "bus_pci"
00:03:46.723 Message: drivers/bus/vdev: Defining dependency "bus_vdev"
00:03:46.723 Message: drivers/mempool/ring: Defining dependency "mempool_ring"
00:03:46.723 Message: Disabling raw/* drivers: missing internal dependency "rawdev"
00:03:46.723 Message: Disabling regex/* drivers: missing internal dependency "regexdev"
00:03:46.723 Message: Disabling ml/* drivers: missing internal dependency "mldev"
00:03:46.723 Message: Disabling event/* drivers: missing internal dependency "eventdev"
00:03:46.723 Message: Disabling baseband/* drivers: missing internal dependency "bbdev"
00:03:46.723 Message: Disabling gpu/* drivers: missing internal dependency "gpudev"
00:03:46.723 Program doxygen found: YES (/usr/local/bin/doxygen)
00:03:46.723 Configuring doxy-api-html.conf using configuration
00:03:46.723 Configuring doxy-api-man.conf using configuration
00:03:46.723 Program mandb found: YES (/usr/bin/mandb)
00:03:46.723 Program sphinx-build found: NO
00:03:46.723 Configuring rte_build_config.h using configuration
00:03:46.723 Message:
00:03:46.723 =================
00:03:46.723 Applications Enabled
00:03:46.723 =================
00:03:46.723
00:03:46.723 apps:
00:03:46.723
00:03:46.723
00:03:46.723 Message:
00:03:46.723 =================
00:03:46.723 Libraries Enabled
00:03:46.723 =================
00:03:46.723
00:03:46.723 libs:
00:03:46.723 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf,
00:03:46.723 net, meter, ethdev, pci, cmdline, hash, timer, compressdev,
00:03:46.723 cryptodev, dmadev, power, reorder, security, vhost,
00:03:46.723
00:03:46.723 Message:
00:03:46.723 ===============
00:03:46.723 Drivers Enabled
00:03:46.723 ===============
00:03:46.723
00:03:46.723 common:
00:03:46.723
00:03:46.723 bus:
00:03:46.723 pci, vdev,
00:03:46.723 mempool:
00:03:46.723 ring,
00:03:46.723 dma:
00:03:46.723
00:03:46.723 net:
00:03:46.723
00:03:46.723 crypto:
00:03:46.723
00:03:46.723 compress:
00:03:46.723
00:03:46.723 vdpa:
00:03:46.723
00:03:46.723
00:03:46.723 Message:
00:03:46.723 =================
00:03:46.723 Content Skipped
00:03:46.723 =================
00:03:46.723
00:03:46.723 apps:
00:03:46.723 dumpcap: explicitly disabled via build config
00:03:46.723 graph: explicitly disabled via build config
00:03:46.723 pdump: explicitly disabled via build config
00:03:46.724 proc-info: explicitly disabled via build config
00:03:46.724 test-acl: explicitly disabled via build config
00:03:46.724 test-bbdev: explicitly disabled via build config
00:03:46.724 test-cmdline: explicitly disabled via build config
00:03:46.724 test-compress-perf: explicitly disabled via build config
00:03:46.724 test-crypto-perf: explicitly disabled via build config
00:03:46.724 test-dma-perf: explicitly disabled via build config
00:03:46.724 test-eventdev: explicitly disabled via build config
00:03:46.724 test-fib: explicitly disabled via build config
00:03:46.724 test-flow-perf: explicitly disabled via build config
00:03:46.724 test-gpudev: explicitly disabled via build config
00:03:46.724 test-mldev: explicitly disabled via build config
00:03:46.724 test-pipeline: explicitly disabled via build config
00:03:46.724 test-pmd: explicitly disabled via build config
00:03:46.724 test-regex: explicitly disabled via build config
00:03:46.724 test-sad: explicitly disabled via build config
00:03:46.724 test-security-perf: explicitly disabled via build config
00:03:46.724
00:03:46.724 libs:
00:03:46.724 argparse: explicitly disabled via build config
00:03:46.724 metrics: explicitly disabled via build config
00:03:46.724 acl: explicitly disabled via build config
00:03:46.724 bbdev: explicitly disabled via build config
00:03:46.724 bitratestats: explicitly disabled via build config
00:03:46.724 bpf: explicitly disabled via build config
00:03:46.724 cfgfile: explicitly disabled via build config
00:03:46.724 distributor: explicitly disabled via build config
00:03:46.724 efd: explicitly disabled via build config
00:03:46.724 eventdev: explicitly disabled via build config
00:03:46.724 dispatcher: explicitly disabled via build config
00:03:46.724 gpudev: explicitly disabled via build config
00:03:46.724 gro: explicitly disabled via build config
00:03:46.724 gso: explicitly disabled via build config
00:03:46.724 ip_frag: explicitly disabled via build config
00:03:46.724 jobstats: explicitly disabled via build config
00:03:46.724 latencystats: explicitly disabled via build config
00:03:46.724 lpm: explicitly disabled via build config
00:03:46.724 member: explicitly disabled via build config
00:03:46.724 pcapng: explicitly disabled via build config
00:03:46.724 rawdev: explicitly disabled via build config
00:03:46.724 regexdev: explicitly disabled via build config
00:03:46.724 mldev: explicitly disabled via build config
00:03:46.724 rib: explicitly disabled via build config
00:03:46.724 sched: explicitly disabled via build config
00:03:46.724 stack: explicitly disabled via build config
00:03:46.724 ipsec: explicitly disabled via build config
00:03:46.724 pdcp: explicitly disabled via build config
00:03:46.724 fib: explicitly disabled via build config
00:03:46.724 port: explicitly disabled via build config
00:03:46.724 pdump: explicitly disabled via build config
00:03:46.724 table: explicitly disabled via build config
00:03:46.724 pipeline: explicitly disabled via build config
00:03:46.724 graph: explicitly disabled via build config
00:03:46.724 node: explicitly disabled via build config
00:03:46.724
00:03:46.724 drivers:
00:03:46.724 common/cpt: not in enabled drivers build config
00:03:46.724 common/dpaax: not in enabled drivers build config
00:03:46.724 common/iavf: not in enabled drivers build config
00:03:46.724 common/idpf: not in enabled drivers build config
00:03:46.724 common/ionic: not in enabled drivers build config
00:03:46.724 common/mvep: not in enabled drivers build config
00:03:46.724 common/octeontx: not in enabled drivers build config
00:03:46.724 bus/auxiliary: not in enabled drivers build config
00:03:46.724 bus/cdx: not in enabled drivers build config
00:03:46.724 bus/dpaa: not in enabled drivers build config
00:03:46.724 bus/fslmc: not in enabled drivers build config
00:03:46.724 bus/ifpga: not in enabled drivers build config
00:03:46.724 bus/platform: not in enabled drivers build config
00:03:46.724 bus/uacce: not in enabled drivers build config
00:03:46.724 bus/vmbus: not in enabled drivers build config
00:03:46.724 common/cnxk: not in enabled drivers build config
00:03:46.724 common/mlx5: not in enabled drivers build config
00:03:46.724 common/nfp: not in enabled drivers build config
00:03:46.724 common/nitrox: not in enabled drivers build config
00:03:46.724 common/qat: not in enabled drivers build config
00:03:46.724 common/sfc_efx: not in enabled drivers build config
00:03:46.724 mempool/bucket: not in enabled drivers build config
00:03:46.724 mempool/cnxk: not in enabled drivers build config
00:03:46.724 mempool/dpaa: not in enabled drivers build config
00:03:46.724 mempool/dpaa2: not in enabled drivers build config
00:03:46.724 mempool/octeontx: not in enabled drivers build config
00:03:46.724 mempool/stack: not in enabled drivers build config
00:03:46.724 dma/cnxk: not in enabled drivers build config
00:03:46.724 dma/dpaa: not in enabled drivers build config
00:03:46.724 dma/dpaa2: not in enabled drivers build config
00:03:46.724 dma/hisilicon: not in enabled drivers build config
00:03:46.724 dma/idxd: not in enabled drivers build config
00:03:46.724 dma/ioat: not in enabled drivers build config
00:03:46.724 dma/skeleton: not in enabled drivers build config
00:03:46.724 net/af_packet: not in enabled drivers build config
00:03:46.724 net/af_xdp: not in enabled drivers build config
00:03:46.724 net/ark: not in enabled drivers build config
00:03:46.724 net/atlantic: not in enabled drivers build config
00:03:46.724 net/avp: not in enabled drivers build config
00:03:46.724 net/axgbe: not in enabled drivers build config
00:03:46.724 net/bnx2x: not in enabled drivers build config
00:03:46.724 net/bnxt: not in enabled drivers build config
00:03:46.724 net/bonding: not in enabled drivers build config
00:03:46.724 net/cnxk: not in enabled drivers build config
00:03:46.724 net/cpfl: not in enabled drivers build config
00:03:46.724 net/cxgbe: not in enabled drivers build config
00:03:46.724 net/dpaa: not in enabled drivers build config
00:03:46.724 net/dpaa2: not in enabled drivers build config
00:03:46.724 net/e1000: not in enabled drivers build config
00:03:46.724 net/ena: not in enabled drivers build config
00:03:46.724 net/enetc: not in enabled drivers build config
00:03:46.724 net/enetfec: not in enabled drivers build config
00:03:46.724 net/enic: not in enabled drivers build config
00:03:46.724 net/failsafe: not in enabled drivers build config
00:03:46.724 net/fm10k: not in enabled drivers build config
00:03:46.724 net/gve: not in enabled drivers build config
00:03:46.724 net/hinic: not in enabled drivers build config
00:03:46.724 net/hns3: not in enabled drivers build config
00:03:46.724 net/i40e: not in enabled drivers build config
00:03:46.724 net/iavf: not in enabled drivers build config
00:03:46.724 net/ice: not in enabled drivers build config
00:03:46.724 net/idpf: not in enabled drivers build config
00:03:46.724 net/igc: not in enabled drivers build config
00:03:46.724 net/ionic: not in enabled drivers build config
00:03:46.724 net/ipn3ke: not in enabled drivers build config
00:03:46.724 net/ixgbe: not in enabled drivers build config
00:03:46.724 net/mana: not in enabled drivers build config
00:03:46.724 net/memif: not in enabled drivers build config
00:03:46.724 net/mlx4: not in enabled drivers build config
00:03:46.724 net/mlx5: not in enabled drivers build config
00:03:46.724 net/mvneta: not in enabled drivers build config
00:03:46.724 net/mvpp2: not in enabled drivers build config
00:03:46.724 net/netvsc: not in enabled drivers build config
00:03:46.724 net/nfb: not in enabled drivers build config
00:03:46.724 net/nfp: not in enabled drivers build config
00:03:46.724 net/ngbe: not in enabled drivers build config
00:03:46.724 net/null: not in enabled drivers build config
00:03:46.724 net/octeontx: not in enabled drivers build config
00:03:46.724 net/octeon_ep: not in enabled drivers build config
00:03:46.724 net/pcap: not in enabled drivers build config
00:03:46.724 net/pfe: not in enabled drivers build config
00:03:46.724 net/qede: not in enabled drivers build config
00:03:46.724 net/ring: not in enabled drivers build config
00:03:46.724 net/sfc: not in enabled drivers build config
00:03:46.724 net/softnic: not in enabled drivers build config
00:03:46.724 net/tap: not in enabled drivers build config
00:03:46.724 net/thunderx: not in enabled drivers build config
00:03:46.724 net/txgbe: not in enabled drivers build config
00:03:46.724 net/vdev_netvsc: not in enabled drivers build config
00:03:46.724 net/vhost: not in enabled drivers build config
00:03:46.724 net/virtio: not in enabled drivers build config
00:03:46.724 net/vmxnet3: not in enabled drivers build config
00:03:46.724 raw/*: missing internal dependency, "rawdev"
00:03:46.724 crypto/armv8: not in enabled drivers build config
00:03:46.724 crypto/bcmfs: not in enabled drivers build config
00:03:46.724 crypto/caam_jr: not in enabled drivers build config
00:03:46.724 crypto/ccp: not in enabled drivers build config
00:03:46.724 crypto/cnxk: not in enabled drivers build config
00:03:46.724 crypto/dpaa_sec: not in enabled drivers build config
00:03:46.724 crypto/dpaa2_sec: not in enabled drivers build config
00:03:46.724 crypto/ipsec_mb: not in enabled drivers build config
00:03:46.724 crypto/mlx5: not in enabled drivers build config
00:03:46.724 crypto/mvsam: not in enabled drivers build config
00:03:46.724 crypto/nitrox: not in enabled drivers build config
00:03:46.724 crypto/null: not in enabled drivers build config
00:03:46.724 crypto/octeontx: not in enabled drivers build config
00:03:46.724 crypto/openssl: not in enabled drivers build config
00:03:46.724 crypto/scheduler: not in enabled drivers build config
00:03:46.724 crypto/uadk: not in enabled drivers build config
00:03:46.724 crypto/virtio: not in enabled drivers build config
00:03:46.724 compress/isal: not in enabled drivers build config
00:03:46.724 compress/mlx5: not in enabled drivers build config
00:03:46.724 compress/nitrox: not in enabled drivers build config
00:03:46.724 compress/octeontx: not in enabled drivers build config
00:03:46.724 compress/zlib: not in enabled drivers build config
00:03:46.724 regex/*: missing internal dependency, "regexdev"
00:03:46.724 ml/*: missing internal dependency, "mldev"
00:03:46.724 vdpa/ifc: not in enabled drivers build config
00:03:46.725 vdpa/mlx5: not in enabled drivers build config
00:03:46.725 vdpa/nfp: not in enabled drivers build config
00:03:46.725 vdpa/sfc: not in enabled drivers build config
00:03:46.725 event/*: missing internal dependency, "eventdev"
00:03:46.725 baseband/*: missing internal dependency, "bbdev"
00:03:46.725 gpu/*: missing internal dependency, "gpudev"
00:03:46.725
00:03:46.725
00:03:46.725 Build targets in project: 85
00:03:46.725
00:03:46.725 DPDK 24.03.0
00:03:46.725
00:03:46.725 User defined options
00:03:46.725 buildtype : debug
00:03:46.725 default_library : shared
00:03:46.725 libdir : lib
00:03:46.725 prefix : /home/vagrant/spdk_repo/spdk/dpdk/build
00:03:46.725 b_sanitize : address
00:03:46.725 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror
00:03:46.725 c_link_args :
00:03:46.725 cpu_instruction_set: native
00:03:46.725 disable_apps : dumpcap,graph,pdump,proc-info,test-acl,test-bbdev,test-cmdline,test-compress-perf,test-crypto-perf,test-dma-perf,test-eventdev,test-fib,test-flow-perf,test-gpudev,test-mldev,test-pipeline,test-pmd,test-regex,test-sad,test-security-perf,test
00:03:46.725 disable_libs : acl,argparse,bbdev,bitratestats,bpf,cfgfile,dispatcher,distributor,efd,eventdev,fib,gpudev,graph,gro,gso,ip_frag,ipsec,jobstats,latencystats,lpm,member,metrics,mldev,node,pcapng,pdcp,pdump,pipeline,port,rawdev,regexdev,rib,sched,stack,table
00:03:46.725 enable_docs : false
00:03:46.725 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring,power/acpi,power/amd_pstate,power/cppc,power/intel_pstate,power/intel_uncore,power/kvm_vm
00:03:46.725 enable_kmods : false
00:03:46.725 max_lcores : 128
00:03:46.725 tests : false
00:03:46.725
00:03:46.725 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja
00:03:46.725 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/dpdk/build-tmp'
00:03:46.725 [1/268] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o
00:03:46.725 [2/268] Compiling C object lib/librte_log.a.p/log_log_linux.c.o
00:03:46.725 [3/268] Linking static target lib/librte_kvargs.a
00:03:46.725 [4/268] Compiling C object lib/librte_log.a.p/log_log.c.o
00:03:46.725 [5/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o
00:03:46.725 [6/268] Linking static target lib/librte_log.a
00:03:46.725 [7/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o
00:03:46.725 [8/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o
00:03:46.725 [9/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o
00:03:46.984 [10/268] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output)
00:03:46.984 [11/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o
00:03:46.984 [12/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o
00:03:46.984 [13/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o
00:03:46.984 [14/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o
00:03:46.984 [15/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o
00:03:46.984 [16/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o
00:03:46.984 [17/268] Linking static target lib/librte_telemetry.a
00:03:47.242 [18/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o
00:03:47.501 [19/268] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output)
00:03:47.501 [20/268] Linking target lib/librte_log.so.24.1
00:03:47.501 [21/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o
00:03:47.501 [22/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o
00:03:47.759 [23/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o
00:03:47.759 [24/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o
00:03:47.759 [25/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o
00:03:47.759 [26/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o
00:03:47.759 [27/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o
00:03:47.759 [28/268] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols
00:03:47.759 [29/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o
00:03:47.759 [30/268] Linking target lib/librte_kvargs.so.24.1
00:03:47.759 [31/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o
00:03:48.017 [32/268] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output)
00:03:48.017 [33/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o
00:03:48.017 [34/268] Linking target lib/librte_telemetry.so.24.1
00:03:48.017 [35/268] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols
00:03:48.017 [36/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o
00:03:48.276 [37/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o
00:03:48.276 [38/268] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols
00:03:48.276 [39/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o
00:03:48.276 [40/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o
00:03:48.276 [41/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o
00:03:48.276 [42/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o
00:03:48.276 [43/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o
00:03:48.534 [44/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o
00:03:48.534 [45/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o
00:03:48.534 [46/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o
00:03:48.794 [47/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o
00:03:48.794 [48/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o
00:03:48.794 [49/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o
00:03:49.053 [50/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o
00:03:49.053 [51/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o
00:03:49.053 [52/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o
00:03:49.053 [53/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o
00:03:49.053 [54/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o
00:03:49.053 [55/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o
00:03:49.053 [56/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o
00:03:49.311 [57/268] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o
00:03:49.312 [58/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o
00:03:49.312 [59/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o
00:03:49.570 [60/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o
00:03:49.570 [61/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o
00:03:49.570 [62/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o
00:03:49.570 [63/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o
00:03:49.570 [64/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o
00:03:49.570 [65/268] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o
00:03:49.829 [66/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o
00:03:49.829 [67/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o
00:03:50.088 [68/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o
00:03:50.088 [69/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o
00:03:50.088 [70/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o
00:03:50.347 [71/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o
00:03:50.347 [72/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o
00:03:50.347 [73/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o
00:03:50.347 [74/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o
00:03:50.347 [75/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o
00:03:50.347 [76/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o
00:03:50.347 [77/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o
00:03:50.605 [78/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o
00:03:50.605 [79/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o
00:03:50.605 [80/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o
00:03:50.605 [81/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o
00:03:50.864 [82/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o
00:03:50.864 [83/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o
00:03:51.122 [84/268] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o
00:03:51.122 [85/268] Linking static target lib/librte_ring.a
00:03:51.122 [86/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o
00:03:51.122 [87/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o
00:03:51.122 [88/268] Linking static target lib/librte_eal.a
00:03:51.122 [89/268] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o
00:03:51.122 [90/268] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o
00:03:51.381 [91/268] Linking static target lib/librte_rcu.a
00:03:51.381 [92/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o
00:03:51.381 [93/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o
00:03:51.381 [94/268] Linking static target lib/librte_mempool.a
00:03:51.381 [95/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o
00:03:51.381 [96/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o
00:03:51.640 [97/268] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output)
00:03:51.640 [98/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o
00:03:51.640 [99/268] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o
00:03:51.640 [100/268] Linking static target lib/net/libnet_crc_avx512_lib.a
00:03:51.899 [101/268] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output)
00:03:51.899 [102/268] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o
00:03:51.899 [103/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o
00:03:51.899 [104/268] Linking static target lib/librte_mbuf.a
00:03:51.899 [105/268] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o
00:03:51.899 [106/268] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o
00:03:52.157 [107/268] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o
00:03:52.157 [108/268] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o
00:03:52.157 [109/268] Linking static target lib/librte_meter.a
00:03:52.416 [110/268] Compiling C object lib/librte_net.a.p/net_rte_net.c.o
00:03:52.416 [111/268] Linking static target lib/librte_net.a
00:03:52.416 [112/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o
00:03:52.416 [113/268] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output)
00:03:52.416 [114/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o
00:03:52.416 [115/268] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output)
00:03:52.416 [116/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o
00:03:52.674 [117/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o
00:03:52.674 [118/268] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output)
00:03:52.933 [119/268] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output)
00:03:52.933 [120/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o
00:03:53.192 [121/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o
00:03:53.192 [122/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o
00:03:53.479 [123/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o
00:03:53.479 [124/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o
00:03:53.479 [125/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o
00:03:53.763 [126/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o
00:03:53.763 [127/268] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o
00:03:53.763 [128/268] Linking static target lib/librte_pci.a
00:03:53.763 [129/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o
00:03:53.763 [130/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o
00:03:53.763 [131/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o
00:03:53.763 [132/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o
00:03:53.763 [133/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o
00:03:54.022 [134/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o
00:03:54.022 [135/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o
00:03:54.022 [136/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o
00:03:54.022 [137/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o
00:03:54.022 [138/268] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output)
00:03:54.022 [139/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o
00:03:54.022 [140/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o
00:03:54.022 [141/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o
00:03:54.022 [142/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o
00:03:54.022 [143/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o
00:03:54.022 [144/268] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o
00:03:54.022 [145/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o
00:03:54.281 [146/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o
00:03:54.281 [147/268] Linking static target lib/librte_cmdline.a
00:03:54.281 [148/268] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o
00:03:54.540 [149/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o
00:03:54.540 [150/268] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o
00:03:54.798 [151/268] Linking static target lib/librte_timer.a
00:03:54.798 [152/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o
00:03:54.798 [153/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o
00:03:54.798 [154/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o
00:03:55.057 [155/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o
00:03:55.057 [156/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o
00:03:55.316 [157/268] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output)
00:03:55.316 [158/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o
00:03:55.316 [159/268] Linking static target lib/librte_compressdev.a
00:03:55.316 [160/268] Compiling
C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:03:55.316 [161/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:03:55.316 [162/268] Linking static target lib/librte_ethdev.a 00:03:55.316 [163/268] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:03:55.575 [164/268] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:03:55.575 [165/268] Linking static target lib/librte_hash.a 00:03:55.575 [166/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:03:55.833 [167/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:03:55.833 [168/268] Linking static target lib/librte_dmadev.a 00:03:55.833 [169/268] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:03:55.833 [170/268] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:03:55.833 [171/268] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:03:55.833 [172/268] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:03:55.833 [173/268] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:03:56.400 [174/268] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:56.400 [175/268] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:03:56.400 [176/268] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:03:56.400 [177/268] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:03:56.400 [178/268] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:03:56.659 [179/268] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:03:56.659 [180/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:03:56.659 [181/268] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:03:56.659 [182/268] Linking static target lib/librte_cryptodev.a 
00:03:56.659 [183/268] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:56.659 [184/268] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:03:56.917 [185/268] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:03:56.917 [186/268] Linking static target lib/librte_power.a 00:03:57.176 [187/268] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:03:57.176 [188/268] Linking static target lib/librte_reorder.a 00:03:57.176 [189/268] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:03:57.176 [190/268] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:03:57.176 [191/268] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:03:57.434 [192/268] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:03:57.434 [193/268] Linking static target lib/librte_security.a 00:03:57.691 [194/268] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:03:57.691 [195/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:03:58.256 [196/268] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:03:58.257 [197/268] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:03:58.257 [198/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:03:58.257 [199/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:03:58.257 [200/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:03:58.515 [201/268] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:03:58.773 [202/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:03:58.773 [203/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:03:58.773 [204/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 
00:03:58.773 [205/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:03:58.773 [206/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:03:59.032 [207/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:03:59.032 [208/268] Linking static target drivers/libtmp_rte_bus_vdev.a 00:03:59.290 [209/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:03:59.290 [210/268] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:59.290 [211/268] Linking static target drivers/libtmp_rte_bus_pci.a 00:03:59.290 [212/268] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:03:59.290 [213/268] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:03:59.290 [214/268] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:03:59.290 [215/268] Linking static target drivers/librte_bus_vdev.a 00:03:59.548 [216/268] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:03:59.548 [217/268] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:03:59.548 [218/268] Linking static target drivers/libtmp_rte_mempool_ring.a 00:03:59.548 [219/268] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:03:59.548 [220/268] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:03:59.548 [221/268] Linking static target drivers/librte_bus_pci.a 00:03:59.806 [222/268] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:59.806 [223/268] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:03:59.806 [224/268] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:03:59.806 [225/268] Compiling C object 
drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:03:59.806 [226/268] Linking static target drivers/librte_mempool_ring.a 00:04:00.065 [227/268] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:04:01.000 [228/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:04:01.933 [229/268] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:04:01.933 [230/268] Linking target lib/librte_eal.so.24.1 00:04:01.933 [231/268] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols 00:04:02.205 [232/268] Linking target lib/librte_ring.so.24.1 00:04:02.205 [233/268] Linking target lib/librte_meter.so.24.1 00:04:02.205 [234/268] Linking target lib/librte_timer.so.24.1 00:04:02.205 [235/268] Linking target lib/librte_pci.so.24.1 00:04:02.205 [236/268] Linking target drivers/librte_bus_vdev.so.24.1 00:04:02.205 [237/268] Linking target lib/librte_dmadev.so.24.1 00:04:02.205 [238/268] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols 00:04:02.205 [239/268] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols 00:04:02.205 [240/268] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols 00:04:02.205 [241/268] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols 00:04:02.205 [242/268] Linking target lib/librte_rcu.so.24.1 00:04:02.205 [243/268] Linking target lib/librte_mempool.so.24.1 00:04:02.463 [244/268] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols 00:04:02.463 [245/268] Linking target drivers/librte_bus_pci.so.24.1 00:04:02.463 [246/268] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols 00:04:02.463 [247/268] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols 00:04:02.463 [248/268] Linking target drivers/librte_mempool_ring.so.24.1 
00:04:02.463 [249/268] Linking target lib/librte_mbuf.so.24.1 00:04:02.721 [250/268] Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols 00:04:02.721 [251/268] Linking target lib/librte_cryptodev.so.24.1 00:04:02.721 [252/268] Linking target lib/librte_net.so.24.1 00:04:02.721 [253/268] Linking target lib/librte_compressdev.so.24.1 00:04:02.979 [254/268] Linking target lib/librte_reorder.so.24.1 00:04:02.979 [255/268] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols 00:04:02.979 [256/268] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols 00:04:02.979 [257/268] Linking target lib/librte_security.so.24.1 00:04:02.979 [258/268] Linking target lib/librte_hash.so.24.1 00:04:02.979 [259/268] Linking target lib/librte_cmdline.so.24.1 00:04:03.238 [260/268] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols 00:04:03.522 [261/268] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:04:03.522 [262/268] Linking target lib/librte_ethdev.so.24.1 00:04:03.824 [263/268] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols 00:04:03.825 [264/268] Linking target lib/librte_power.so.24.1 00:04:05.726 [265/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:04:05.726 [266/268] Linking static target lib/librte_vhost.a 00:04:07.628 [267/268] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:04:07.886 [268/268] Linking target lib/librte_vhost.so.24.1 00:04:07.886 INFO: autodetecting backend as ninja 00:04:07.886 INFO: calculating backend command to run: /usr/local/bin/ninja -C /home/vagrant/spdk_repo/spdk/dpdk/build-tmp -j 10 00:04:29.815 CC lib/ut/ut.o 00:04:29.815 CC lib/ut_mock/mock.o 00:04:29.815 CC lib/log/log_deprecated.o 00:04:29.815 CC lib/log/log.o 00:04:29.815 CC lib/log/log_flags.o 00:04:29.815 LIB libspdk_ut.a 00:04:29.815 LIB 
libspdk_ut_mock.a 00:04:29.815 SO libspdk_ut.so.2.0 00:04:29.815 LIB libspdk_log.a 00:04:29.815 SO libspdk_ut_mock.so.6.0 00:04:29.815 SO libspdk_log.so.7.1 00:04:29.815 SYMLINK libspdk_ut.so 00:04:29.815 SYMLINK libspdk_ut_mock.so 00:04:29.815 SYMLINK libspdk_log.so 00:04:30.075 CC lib/dma/dma.o 00:04:30.075 CC lib/ioat/ioat.o 00:04:30.075 CXX lib/trace_parser/trace.o 00:04:30.075 CC lib/util/base64.o 00:04:30.075 CC lib/util/bit_array.o 00:04:30.075 CC lib/util/cpuset.o 00:04:30.075 CC lib/util/crc16.o 00:04:30.075 CC lib/util/crc32.o 00:04:30.075 CC lib/util/crc32c.o 00:04:30.075 CC lib/vfio_user/host/vfio_user_pci.o 00:04:30.334 CC lib/util/crc32_ieee.o 00:04:30.334 CC lib/util/crc64.o 00:04:30.334 CC lib/util/dif.o 00:04:30.334 LIB libspdk_dma.a 00:04:30.334 CC lib/util/fd.o 00:04:30.334 SO libspdk_dma.so.5.0 00:04:30.334 CC lib/util/fd_group.o 00:04:30.334 LIB libspdk_ioat.a 00:04:30.334 SYMLINK libspdk_dma.so 00:04:30.334 CC lib/util/file.o 00:04:30.334 CC lib/vfio_user/host/vfio_user.o 00:04:30.334 CC lib/util/hexlify.o 00:04:30.334 CC lib/util/iov.o 00:04:30.334 SO libspdk_ioat.so.7.0 00:04:30.652 SYMLINK libspdk_ioat.so 00:04:30.652 CC lib/util/math.o 00:04:30.652 CC lib/util/net.o 00:04:30.652 CC lib/util/pipe.o 00:04:30.652 CC lib/util/strerror_tls.o 00:04:30.652 CC lib/util/string.o 00:04:30.652 CC lib/util/uuid.o 00:04:30.652 LIB libspdk_vfio_user.a 00:04:30.652 CC lib/util/xor.o 00:04:30.652 SO libspdk_vfio_user.so.5.0 00:04:30.652 CC lib/util/zipf.o 00:04:30.652 CC lib/util/md5.o 00:04:30.652 SYMLINK libspdk_vfio_user.so 00:04:31.221 LIB libspdk_util.a 00:04:31.221 LIB libspdk_trace_parser.a 00:04:31.221 SO libspdk_trace_parser.so.6.0 00:04:31.221 SO libspdk_util.so.10.1 00:04:31.480 SYMLINK libspdk_trace_parser.so 00:04:31.480 SYMLINK libspdk_util.so 00:04:31.738 CC lib/env_dpdk/memory.o 00:04:31.738 CC lib/env_dpdk/env.o 00:04:31.738 CC lib/env_dpdk/init.o 00:04:31.738 CC lib/env_dpdk/threads.o 00:04:31.738 CC lib/env_dpdk/pci.o 00:04:31.738 CC 
lib/idxd/idxd.o 00:04:31.738 CC lib/conf/conf.o 00:04:31.738 CC lib/vmd/vmd.o 00:04:31.738 CC lib/rdma_utils/rdma_utils.o 00:04:31.738 CC lib/json/json_parse.o 00:04:31.738 CC lib/env_dpdk/pci_ioat.o 00:04:31.995 LIB libspdk_conf.a 00:04:31.995 CC lib/env_dpdk/pci_virtio.o 00:04:31.995 SO libspdk_conf.so.6.0 00:04:31.995 SYMLINK libspdk_conf.so 00:04:31.995 CC lib/idxd/idxd_user.o 00:04:31.995 CC lib/env_dpdk/pci_vmd.o 00:04:31.995 LIB libspdk_rdma_utils.a 00:04:31.995 CC lib/vmd/led.o 00:04:32.252 SO libspdk_rdma_utils.so.1.0 00:04:32.252 CC lib/json/json_util.o 00:04:32.252 SYMLINK libspdk_rdma_utils.so 00:04:32.252 CC lib/json/json_write.o 00:04:32.252 CC lib/env_dpdk/pci_idxd.o 00:04:32.252 CC lib/env_dpdk/pci_event.o 00:04:32.252 CC lib/env_dpdk/sigbus_handler.o 00:04:32.511 CC lib/env_dpdk/pci_dpdk.o 00:04:32.511 CC lib/env_dpdk/pci_dpdk_2207.o 00:04:32.511 CC lib/env_dpdk/pci_dpdk_2211.o 00:04:32.511 CC lib/idxd/idxd_kernel.o 00:04:32.511 LIB libspdk_json.a 00:04:32.511 CC lib/rdma_provider/rdma_provider_verbs.o 00:04:32.511 CC lib/rdma_provider/common.o 00:04:32.511 SO libspdk_json.so.6.0 00:04:32.511 LIB libspdk_vmd.a 00:04:32.511 SO libspdk_vmd.so.6.0 00:04:32.770 SYMLINK libspdk_json.so 00:04:32.770 SYMLINK libspdk_vmd.so 00:04:32.770 LIB libspdk_idxd.a 00:04:32.770 SO libspdk_idxd.so.12.1 00:04:32.770 LIB libspdk_rdma_provider.a 00:04:32.770 SO libspdk_rdma_provider.so.7.0 00:04:32.770 SYMLINK libspdk_idxd.so 00:04:33.029 SYMLINK libspdk_rdma_provider.so 00:04:33.029 CC lib/jsonrpc/jsonrpc_server.o 00:04:33.029 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:04:33.029 CC lib/jsonrpc/jsonrpc_client.o 00:04:33.029 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:04:33.288 LIB libspdk_jsonrpc.a 00:04:33.288 SO libspdk_jsonrpc.so.6.0 00:04:33.548 SYMLINK libspdk_jsonrpc.so 00:04:33.548 LIB libspdk_env_dpdk.a 00:04:33.808 SO libspdk_env_dpdk.so.15.1 00:04:33.808 CC lib/rpc/rpc.o 00:04:33.808 SYMLINK libspdk_env_dpdk.so 00:04:34.067 LIB libspdk_rpc.a 00:04:34.067 SO 
libspdk_rpc.so.6.0 00:04:34.326 SYMLINK libspdk_rpc.so 00:04:34.584 CC lib/keyring/keyring.o 00:04:34.584 CC lib/keyring/keyring_rpc.o 00:04:34.584 CC lib/notify/notify.o 00:04:34.584 CC lib/notify/notify_rpc.o 00:04:34.584 CC lib/trace/trace.o 00:04:34.584 CC lib/trace/trace_flags.o 00:04:34.584 CC lib/trace/trace_rpc.o 00:04:34.842 LIB libspdk_notify.a 00:04:34.842 SO libspdk_notify.so.6.0 00:04:34.842 LIB libspdk_keyring.a 00:04:34.842 SO libspdk_keyring.so.2.0 00:04:34.842 SYMLINK libspdk_notify.so 00:04:34.842 LIB libspdk_trace.a 00:04:34.842 SYMLINK libspdk_keyring.so 00:04:34.842 SO libspdk_trace.so.11.0 00:04:35.100 SYMLINK libspdk_trace.so 00:04:35.359 CC lib/thread/thread.o 00:04:35.359 CC lib/thread/iobuf.o 00:04:35.359 CC lib/sock/sock_rpc.o 00:04:35.359 CC lib/sock/sock.o 00:04:35.926 LIB libspdk_sock.a 00:04:35.926 SO libspdk_sock.so.10.0 00:04:35.926 SYMLINK libspdk_sock.so 00:04:36.493 CC lib/nvme/nvme_ctrlr_cmd.o 00:04:36.493 CC lib/nvme/nvme_ctrlr.o 00:04:36.493 CC lib/nvme/nvme_fabric.o 00:04:36.493 CC lib/nvme/nvme_pcie_common.o 00:04:36.493 CC lib/nvme/nvme_ns.o 00:04:36.493 CC lib/nvme/nvme_ns_cmd.o 00:04:36.493 CC lib/nvme/nvme_pcie.o 00:04:36.493 CC lib/nvme/nvme_qpair.o 00:04:36.493 CC lib/nvme/nvme.o 00:04:37.093 CC lib/nvme/nvme_quirks.o 00:04:37.351 CC lib/nvme/nvme_transport.o 00:04:37.351 CC lib/nvme/nvme_discovery.o 00:04:37.351 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:04:37.351 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:04:37.610 CC lib/nvme/nvme_tcp.o 00:04:37.610 CC lib/nvme/nvme_opal.o 00:04:37.610 LIB libspdk_thread.a 00:04:37.868 SO libspdk_thread.so.11.0 00:04:37.868 CC lib/nvme/nvme_io_msg.o 00:04:37.868 SYMLINK libspdk_thread.so 00:04:38.125 CC lib/accel/accel.o 00:04:38.125 CC lib/blob/blobstore.o 00:04:38.125 CC lib/nvme/nvme_poll_group.o 00:04:38.125 CC lib/init/json_config.o 00:04:38.125 CC lib/init/subsystem.o 00:04:38.125 CC lib/virtio/virtio.o 00:04:38.125 CC lib/virtio/virtio_vhost_user.o 00:04:38.381 CC 
lib/virtio/virtio_vfio_user.o 00:04:38.381 CC lib/init/subsystem_rpc.o 00:04:38.638 CC lib/virtio/virtio_pci.o 00:04:38.638 CC lib/init/rpc.o 00:04:38.638 CC lib/nvme/nvme_zns.o 00:04:38.638 CC lib/nvme/nvme_stubs.o 00:04:38.638 CC lib/fsdev/fsdev.o 00:04:38.896 LIB libspdk_init.a 00:04:38.896 SO libspdk_init.so.6.0 00:04:38.896 LIB libspdk_virtio.a 00:04:38.896 SO libspdk_virtio.so.7.0 00:04:38.896 SYMLINK libspdk_init.so 00:04:38.896 CC lib/fsdev/fsdev_io.o 00:04:38.896 SYMLINK libspdk_virtio.so 00:04:38.896 CC lib/nvme/nvme_auth.o 00:04:39.153 CC lib/event/app.o 00:04:39.153 CC lib/event/reactor.o 00:04:39.153 CC lib/event/log_rpc.o 00:04:39.153 CC lib/event/app_rpc.o 00:04:39.410 CC lib/event/scheduler_static.o 00:04:39.410 CC lib/nvme/nvme_cuse.o 00:04:39.410 CC lib/fsdev/fsdev_rpc.o 00:04:39.667 CC lib/nvme/nvme_rdma.o 00:04:39.667 CC lib/blob/request.o 00:04:39.667 CC lib/accel/accel_rpc.o 00:04:39.667 CC lib/blob/zeroes.o 00:04:39.667 LIB libspdk_fsdev.a 00:04:39.667 CC lib/blob/blob_bs_dev.o 00:04:39.667 SO libspdk_fsdev.so.2.0 00:04:39.667 LIB libspdk_event.a 00:04:39.924 CC lib/accel/accel_sw.o 00:04:39.924 SO libspdk_event.so.14.0 00:04:39.924 SYMLINK libspdk_fsdev.so 00:04:39.924 SYMLINK libspdk_event.so 00:04:40.182 CC lib/fuse_dispatcher/fuse_dispatcher.o 00:04:40.182 LIB libspdk_accel.a 00:04:40.182 SO libspdk_accel.so.16.0 00:04:40.441 SYMLINK libspdk_accel.so 00:04:40.699 CC lib/bdev/bdev.o 00:04:40.699 CC lib/bdev/bdev_rpc.o 00:04:40.699 CC lib/bdev/bdev_zone.o 00:04:40.699 CC lib/bdev/part.o 00:04:40.699 CC lib/bdev/scsi_nvme.o 00:04:40.957 LIB libspdk_fuse_dispatcher.a 00:04:40.957 SO libspdk_fuse_dispatcher.so.1.0 00:04:40.957 SYMLINK libspdk_fuse_dispatcher.so 00:04:41.216 LIB libspdk_nvme.a 00:04:41.475 SO libspdk_nvme.so.15.0 00:04:41.735 SYMLINK libspdk_nvme.so 00:04:42.299 LIB libspdk_blob.a 00:04:42.299 SO libspdk_blob.so.12.0 00:04:42.299 SYMLINK libspdk_blob.so 00:04:42.865 CC lib/blobfs/blobfs.o 00:04:42.865 CC lib/blobfs/tree.o 
00:04:42.865 CC lib/lvol/lvol.o 00:04:43.890 LIB libspdk_blobfs.a 00:04:43.890 SO libspdk_blobfs.so.11.0 00:04:43.890 LIB libspdk_lvol.a 00:04:43.890 SYMLINK libspdk_blobfs.so 00:04:43.890 SO libspdk_lvol.so.11.0 00:04:43.890 SYMLINK libspdk_lvol.so 00:04:44.162 LIB libspdk_bdev.a 00:04:44.162 SO libspdk_bdev.so.17.0 00:04:44.421 SYMLINK libspdk_bdev.so 00:04:44.680 CC lib/nvmf/ctrlr_discovery.o 00:04:44.680 CC lib/nvmf/ctrlr_bdev.o 00:04:44.680 CC lib/nvmf/ctrlr.o 00:04:44.680 CC lib/nvmf/subsystem.o 00:04:44.680 CC lib/nvmf/nvmf.o 00:04:44.680 CC lib/nvmf/nvmf_rpc.o 00:04:44.680 CC lib/ftl/ftl_core.o 00:04:44.680 CC lib/scsi/dev.o 00:04:44.680 CC lib/nbd/nbd.o 00:04:44.680 CC lib/ublk/ublk.o 00:04:44.940 CC lib/scsi/lun.o 00:04:44.940 CC lib/ftl/ftl_init.o 00:04:45.199 CC lib/nbd/nbd_rpc.o 00:04:45.199 CC lib/ftl/ftl_layout.o 00:04:45.199 CC lib/scsi/port.o 00:04:45.199 CC lib/ftl/ftl_debug.o 00:04:45.199 LIB libspdk_nbd.a 00:04:45.459 CC lib/scsi/scsi.o 00:04:45.459 SO libspdk_nbd.so.7.0 00:04:45.459 CC lib/ublk/ublk_rpc.o 00:04:45.459 CC lib/scsi/scsi_bdev.o 00:04:45.459 SYMLINK libspdk_nbd.so 00:04:45.459 CC lib/scsi/scsi_pr.o 00:04:45.459 CC lib/scsi/scsi_rpc.o 00:04:45.459 CC lib/ftl/ftl_io.o 00:04:45.459 CC lib/ftl/ftl_sb.o 00:04:45.459 LIB libspdk_ublk.a 00:04:45.719 SO libspdk_ublk.so.3.0 00:04:45.719 CC lib/nvmf/transport.o 00:04:45.719 CC lib/scsi/task.o 00:04:45.719 CC lib/nvmf/tcp.o 00:04:45.719 SYMLINK libspdk_ublk.so 00:04:45.719 CC lib/ftl/ftl_l2p.o 00:04:45.719 CC lib/nvmf/stubs.o 00:04:45.719 CC lib/ftl/ftl_l2p_flat.o 00:04:45.719 CC lib/ftl/ftl_nv_cache.o 00:04:45.976 CC lib/ftl/ftl_band.o 00:04:45.976 CC lib/ftl/ftl_band_ops.o 00:04:45.976 CC lib/nvmf/mdns_server.o 00:04:45.976 LIB libspdk_scsi.a 00:04:45.976 SO libspdk_scsi.so.9.0 00:04:45.976 CC lib/nvmf/rdma.o 00:04:46.236 SYMLINK libspdk_scsi.so 00:04:46.236 CC lib/nvmf/auth.o 00:04:46.236 CC lib/ftl/ftl_writer.o 00:04:46.236 CC lib/ftl/ftl_rq.o 00:04:46.236 CC lib/iscsi/conn.o 00:04:46.236 
CC lib/iscsi/init_grp.o 00:04:46.494 CC lib/iscsi/iscsi.o 00:04:46.494 CC lib/iscsi/param.o 00:04:46.494 CC lib/iscsi/portal_grp.o 00:04:46.494 CC lib/iscsi/tgt_node.o 00:04:46.751 CC lib/iscsi/iscsi_subsystem.o 00:04:46.751 CC lib/iscsi/iscsi_rpc.o 00:04:47.009 CC lib/ftl/ftl_reloc.o 00:04:47.009 CC lib/vhost/vhost.o 00:04:47.009 CC lib/iscsi/task.o 00:04:47.009 CC lib/ftl/ftl_l2p_cache.o 00:04:47.009 CC lib/vhost/vhost_rpc.o 00:04:47.268 CC lib/vhost/vhost_scsi.o 00:04:47.268 CC lib/ftl/ftl_p2l.o 00:04:47.268 CC lib/ftl/ftl_p2l_log.o 00:04:47.268 CC lib/vhost/vhost_blk.o 00:04:47.526 CC lib/vhost/rte_vhost_user.o 00:04:47.526 CC lib/ftl/mngt/ftl_mngt.o 00:04:47.786 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:04:47.786 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:04:47.786 CC lib/ftl/mngt/ftl_mngt_startup.o 00:04:47.786 CC lib/ftl/mngt/ftl_mngt_md.o 00:04:47.786 CC lib/ftl/mngt/ftl_mngt_misc.o 00:04:47.786 LIB libspdk_iscsi.a 00:04:48.046 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:04:48.046 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:04:48.046 CC lib/ftl/mngt/ftl_mngt_band.o 00:04:48.046 SO libspdk_iscsi.so.8.0 00:04:48.046 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:04:48.046 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:04:48.046 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:04:48.046 SYMLINK libspdk_iscsi.so 00:04:48.046 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:04:48.046 CC lib/ftl/utils/ftl_conf.o 00:04:48.305 CC lib/ftl/utils/ftl_md.o 00:04:48.305 CC lib/ftl/utils/ftl_mempool.o 00:04:48.305 CC lib/ftl/utils/ftl_bitmap.o 00:04:48.305 CC lib/ftl/utils/ftl_property.o 00:04:48.305 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:04:48.305 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:04:48.305 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:04:48.563 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:04:48.563 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:04:48.563 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:04:48.563 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:04:48.563 LIB libspdk_vhost.a 00:04:48.563 CC lib/ftl/upgrade/ftl_sb_v3.o 00:04:48.563 CC 
lib/ftl/upgrade/ftl_sb_v5.o 00:04:48.563 LIB libspdk_nvmf.a 00:04:48.563 CC lib/ftl/nvc/ftl_nvc_dev.o 00:04:48.563 SO libspdk_vhost.so.8.0 00:04:48.563 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:04:48.822 CC lib/ftl/nvc/ftl_nvc_bdev_non_vss.o 00:04:48.822 CC lib/ftl/nvc/ftl_nvc_bdev_common.o 00:04:48.822 SO libspdk_nvmf.so.20.0 00:04:48.822 SYMLINK libspdk_vhost.so 00:04:48.822 CC lib/ftl/base/ftl_base_dev.o 00:04:48.822 CC lib/ftl/base/ftl_base_bdev.o 00:04:48.822 CC lib/ftl/ftl_trace.o 00:04:49.081 SYMLINK libspdk_nvmf.so 00:04:49.082 LIB libspdk_ftl.a 00:04:49.340 SO libspdk_ftl.so.9.0 00:04:49.600 SYMLINK libspdk_ftl.so 00:04:50.167 CC module/env_dpdk/env_dpdk_rpc.o 00:04:50.167 CC module/blob/bdev/blob_bdev.o 00:04:50.167 CC module/keyring/file/keyring.o 00:04:50.167 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:04:50.167 CC module/accel/ioat/accel_ioat.o 00:04:50.167 CC module/scheduler/gscheduler/gscheduler.o 00:04:50.167 CC module/scheduler/dynamic/scheduler_dynamic.o 00:04:50.167 CC module/sock/posix/posix.o 00:04:50.167 CC module/fsdev/aio/fsdev_aio.o 00:04:50.167 CC module/accel/error/accel_error.o 00:04:50.167 LIB libspdk_env_dpdk_rpc.a 00:04:50.167 SO libspdk_env_dpdk_rpc.so.6.0 00:04:50.167 CC module/keyring/file/keyring_rpc.o 00:04:50.167 SYMLINK libspdk_env_dpdk_rpc.so 00:04:50.167 CC module/accel/error/accel_error_rpc.o 00:04:50.167 LIB libspdk_scheduler_dpdk_governor.a 00:04:50.426 SO libspdk_scheduler_dpdk_governor.so.4.0 00:04:50.426 LIB libspdk_scheduler_gscheduler.a 00:04:50.426 LIB libspdk_scheduler_dynamic.a 00:04:50.426 SO libspdk_scheduler_gscheduler.so.4.0 00:04:50.426 SO libspdk_scheduler_dynamic.so.4.0 00:04:50.426 CC module/accel/ioat/accel_ioat_rpc.o 00:04:50.426 SYMLINK libspdk_scheduler_gscheduler.so 00:04:50.426 SYMLINK libspdk_scheduler_dpdk_governor.so 00:04:50.426 CC module/fsdev/aio/fsdev_aio_rpc.o 00:04:50.426 CC module/fsdev/aio/linux_aio_mgr.o 00:04:50.426 LIB libspdk_keyring_file.a 00:04:50.426 SYMLINK 
libspdk_scheduler_dynamic.so 00:04:50.426 LIB libspdk_accel_error.a 00:04:50.426 LIB libspdk_blob_bdev.a 00:04:50.426 SO libspdk_keyring_file.so.2.0 00:04:50.426 SO libspdk_blob_bdev.so.12.0 00:04:50.426 SO libspdk_accel_error.so.2.0 00:04:50.426 SYMLINK libspdk_keyring_file.so 00:04:50.426 SYMLINK libspdk_blob_bdev.so 00:04:50.426 SYMLINK libspdk_accel_error.so 00:04:50.426 LIB libspdk_accel_ioat.a 00:04:50.426 CC module/keyring/linux/keyring.o 00:04:50.685 CC module/keyring/linux/keyring_rpc.o 00:04:50.685 SO libspdk_accel_ioat.so.6.0 00:04:50.685 CC module/accel/dsa/accel_dsa.o 00:04:50.685 SYMLINK libspdk_accel_ioat.so 00:04:50.685 CC module/accel/iaa/accel_iaa.o 00:04:50.685 CC module/accel/iaa/accel_iaa_rpc.o 00:04:50.685 LIB libspdk_keyring_linux.a 00:04:50.685 SO libspdk_keyring_linux.so.1.0 00:04:50.685 CC module/bdev/delay/vbdev_delay.o 00:04:50.685 CC module/blobfs/bdev/blobfs_bdev.o 00:04:50.685 CC module/bdev/error/vbdev_error.o 00:04:50.944 CC module/bdev/gpt/gpt.o 00:04:50.944 SYMLINK libspdk_keyring_linux.so 00:04:50.944 CC module/bdev/error/vbdev_error_rpc.o 00:04:50.944 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:04:50.944 CC module/accel/dsa/accel_dsa_rpc.o 00:04:50.944 LIB libspdk_accel_iaa.a 00:04:50.944 SO libspdk_accel_iaa.so.3.0 00:04:50.944 LIB libspdk_fsdev_aio.a 00:04:50.944 CC module/bdev/delay/vbdev_delay_rpc.o 00:04:50.944 SO libspdk_fsdev_aio.so.1.0 00:04:50.944 LIB libspdk_sock_posix.a 00:04:50.944 SYMLINK libspdk_accel_iaa.so 00:04:51.202 LIB libspdk_blobfs_bdev.a 00:04:51.202 LIB libspdk_accel_dsa.a 00:04:51.202 SO libspdk_sock_posix.so.6.0 00:04:51.202 SO libspdk_blobfs_bdev.so.6.0 00:04:51.202 CC module/bdev/gpt/vbdev_gpt.o 00:04:51.202 SYMLINK libspdk_fsdev_aio.so 00:04:51.202 SO libspdk_accel_dsa.so.5.0 00:04:51.202 LIB libspdk_bdev_error.a 00:04:51.202 SO libspdk_bdev_error.so.6.0 00:04:51.202 SYMLINK libspdk_blobfs_bdev.so 00:04:51.202 SYMLINK libspdk_sock_posix.so 00:04:51.202 SYMLINK libspdk_accel_dsa.so 00:04:51.202 SYMLINK 
libspdk_bdev_error.so 00:04:51.202 CC module/bdev/malloc/bdev_malloc.o 00:04:51.202 CC module/bdev/lvol/vbdev_lvol.o 00:04:51.461 CC module/bdev/null/bdev_null.o 00:04:51.461 LIB libspdk_bdev_delay.a 00:04:51.461 CC module/bdev/nvme/bdev_nvme.o 00:04:51.461 CC module/bdev/passthru/vbdev_passthru.o 00:04:51.462 CC module/bdev/split/vbdev_split.o 00:04:51.462 CC module/bdev/raid/bdev_raid.o 00:04:51.462 SO libspdk_bdev_delay.so.6.0 00:04:51.462 CC module/bdev/zone_block/vbdev_zone_block.o 00:04:51.462 LIB libspdk_bdev_gpt.a 00:04:51.462 SO libspdk_bdev_gpt.so.6.0 00:04:51.462 SYMLINK libspdk_bdev_delay.so 00:04:51.462 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:04:51.462 SYMLINK libspdk_bdev_gpt.so 00:04:51.462 CC module/bdev/raid/bdev_raid_rpc.o 00:04:51.720 CC module/bdev/null/bdev_null_rpc.o 00:04:51.720 CC module/bdev/split/vbdev_split_rpc.o 00:04:51.720 CC module/bdev/raid/bdev_raid_sb.o 00:04:51.720 CC module/bdev/malloc/bdev_malloc_rpc.o 00:04:51.720 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:04:51.720 CC module/bdev/raid/raid0.o 00:04:51.720 LIB libspdk_bdev_zone_block.a 00:04:51.720 SO libspdk_bdev_zone_block.so.6.0 00:04:51.720 LIB libspdk_bdev_null.a 00:04:52.005 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:04:52.005 LIB libspdk_bdev_split.a 00:04:52.005 SO libspdk_bdev_null.so.6.0 00:04:52.005 SO libspdk_bdev_split.so.6.0 00:04:52.005 SYMLINK libspdk_bdev_zone_block.so 00:04:52.005 LIB libspdk_bdev_malloc.a 00:04:52.005 LIB libspdk_bdev_passthru.a 00:04:52.005 CC module/bdev/raid/raid1.o 00:04:52.005 SO libspdk_bdev_malloc.so.6.0 00:04:52.005 SYMLINK libspdk_bdev_null.so 00:04:52.005 SYMLINK libspdk_bdev_split.so 00:04:52.005 SO libspdk_bdev_passthru.so.6.0 00:04:52.005 CC module/bdev/raid/concat.o 00:04:52.005 SYMLINK libspdk_bdev_passthru.so 00:04:52.005 SYMLINK libspdk_bdev_malloc.so 00:04:52.005 CC module/bdev/raid/raid5f.o 00:04:52.285 CC module/bdev/aio/bdev_aio.o 00:04:52.285 CC module/bdev/ftl/bdev_ftl.o 00:04:52.285 CC 
module/bdev/virtio/bdev_virtio_scsi.o 00:04:52.285 CC module/bdev/iscsi/bdev_iscsi.o 00:04:52.285 CC module/bdev/aio/bdev_aio_rpc.o 00:04:52.285 CC module/bdev/ftl/bdev_ftl_rpc.o 00:04:52.285 LIB libspdk_bdev_lvol.a 00:04:52.285 SO libspdk_bdev_lvol.so.6.0 00:04:52.543 CC module/bdev/nvme/bdev_nvme_rpc.o 00:04:52.543 SYMLINK libspdk_bdev_lvol.so 00:04:52.543 CC module/bdev/virtio/bdev_virtio_blk.o 00:04:52.543 CC module/bdev/nvme/nvme_rpc.o 00:04:52.543 LIB libspdk_bdev_ftl.a 00:04:52.543 SO libspdk_bdev_ftl.so.6.0 00:04:52.543 LIB libspdk_bdev_aio.a 00:04:52.543 SO libspdk_bdev_aio.so.6.0 00:04:52.543 SYMLINK libspdk_bdev_ftl.so 00:04:52.543 CC module/bdev/nvme/bdev_mdns_client.o 00:04:52.543 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:04:52.802 CC module/bdev/nvme/vbdev_opal.o 00:04:52.802 SYMLINK libspdk_bdev_aio.so 00:04:52.802 CC module/bdev/nvme/vbdev_opal_rpc.o 00:04:52.802 LIB libspdk_bdev_raid.a 00:04:52.802 CC module/bdev/virtio/bdev_virtio_rpc.o 00:04:52.802 SO libspdk_bdev_raid.so.6.0 00:04:52.802 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:04:52.802 LIB libspdk_bdev_iscsi.a 00:04:52.802 SO libspdk_bdev_iscsi.so.6.0 00:04:53.061 SYMLINK libspdk_bdev_raid.so 00:04:53.061 SYMLINK libspdk_bdev_iscsi.so 00:04:53.061 LIB libspdk_bdev_virtio.a 00:04:53.061 SO libspdk_bdev_virtio.so.6.0 00:04:53.319 SYMLINK libspdk_bdev_virtio.so 00:04:54.701 LIB libspdk_bdev_nvme.a 00:04:54.701 SO libspdk_bdev_nvme.so.7.1 00:04:54.701 SYMLINK libspdk_bdev_nvme.so 00:04:55.269 CC module/event/subsystems/keyring/keyring.o 00:04:55.269 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:04:55.269 CC module/event/subsystems/fsdev/fsdev.o 00:04:55.527 CC module/event/subsystems/vmd/vmd.o 00:04:55.527 CC module/event/subsystems/vmd/vmd_rpc.o 00:04:55.528 CC module/event/subsystems/scheduler/scheduler.o 00:04:55.528 CC module/event/subsystems/sock/sock.o 00:04:55.528 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:04:55.528 CC module/event/subsystems/iobuf/iobuf.o 00:04:55.528 LIB 
libspdk_event_vhost_blk.a 00:04:55.528 LIB libspdk_event_vmd.a 00:04:55.528 LIB libspdk_event_sock.a 00:04:55.528 SO libspdk_event_vhost_blk.so.3.0 00:04:55.528 LIB libspdk_event_keyring.a 00:04:55.528 SO libspdk_event_sock.so.5.0 00:04:55.528 LIB libspdk_event_fsdev.a 00:04:55.528 LIB libspdk_event_iobuf.a 00:04:55.528 LIB libspdk_event_scheduler.a 00:04:55.528 SO libspdk_event_vmd.so.6.0 00:04:55.528 SO libspdk_event_keyring.so.1.0 00:04:55.528 SO libspdk_event_fsdev.so.1.0 00:04:55.528 SYMLINK libspdk_event_vhost_blk.so 00:04:55.528 SO libspdk_event_iobuf.so.3.0 00:04:55.528 SO libspdk_event_scheduler.so.4.0 00:04:55.528 SYMLINK libspdk_event_sock.so 00:04:55.528 SYMLINK libspdk_event_keyring.so 00:04:55.528 SYMLINK libspdk_event_vmd.so 00:04:55.528 SYMLINK libspdk_event_fsdev.so 00:04:55.786 SYMLINK libspdk_event_iobuf.so 00:04:55.786 SYMLINK libspdk_event_scheduler.so 00:04:56.044 CC module/event/subsystems/accel/accel.o 00:04:56.302 LIB libspdk_event_accel.a 00:04:56.302 SO libspdk_event_accel.so.6.0 00:04:56.302 SYMLINK libspdk_event_accel.so 00:04:56.562 CC module/event/subsystems/bdev/bdev.o 00:04:56.821 LIB libspdk_event_bdev.a 00:04:56.821 SO libspdk_event_bdev.so.6.0 00:04:56.821 SYMLINK libspdk_event_bdev.so 00:04:57.079 CC module/event/subsystems/ublk/ublk.o 00:04:57.079 CC module/event/subsystems/nbd/nbd.o 00:04:57.079 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:04:57.079 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:04:57.079 CC module/event/subsystems/scsi/scsi.o 00:04:57.338 LIB libspdk_event_nbd.a 00:04:57.338 LIB libspdk_event_ublk.a 00:04:57.338 LIB libspdk_event_scsi.a 00:04:57.338 SO libspdk_event_nbd.so.6.0 00:04:57.338 SO libspdk_event_ublk.so.3.0 00:04:57.338 SO libspdk_event_scsi.so.6.0 00:04:57.338 SYMLINK libspdk_event_nbd.so 00:04:57.338 SYMLINK libspdk_event_scsi.so 00:04:57.338 LIB libspdk_event_nvmf.a 00:04:57.338 SYMLINK libspdk_event_ublk.so 00:04:57.596 SO libspdk_event_nvmf.so.6.0 00:04:57.596 SYMLINK libspdk_event_nvmf.so 
00:04:57.859 CC module/event/subsystems/iscsi/iscsi.o 00:04:57.859 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:04:57.859 LIB libspdk_event_vhost_scsi.a 00:04:58.121 LIB libspdk_event_iscsi.a 00:04:58.121 SO libspdk_event_vhost_scsi.so.3.0 00:04:58.121 SO libspdk_event_iscsi.so.6.0 00:04:58.121 SYMLINK libspdk_event_vhost_scsi.so 00:04:58.121 SYMLINK libspdk_event_iscsi.so 00:04:58.379 SO libspdk.so.6.0 00:04:58.379 SYMLINK libspdk.so 00:04:58.637 CC test/rpc_client/rpc_client_test.o 00:04:58.637 TEST_HEADER include/spdk/accel.h 00:04:58.637 TEST_HEADER include/spdk/accel_module.h 00:04:58.637 TEST_HEADER include/spdk/assert.h 00:04:58.637 TEST_HEADER include/spdk/barrier.h 00:04:58.637 TEST_HEADER include/spdk/base64.h 00:04:58.637 CXX app/trace/trace.o 00:04:58.637 TEST_HEADER include/spdk/bdev.h 00:04:58.637 TEST_HEADER include/spdk/bdev_module.h 00:04:58.637 TEST_HEADER include/spdk/bdev_zone.h 00:04:58.637 TEST_HEADER include/spdk/bit_array.h 00:04:58.637 TEST_HEADER include/spdk/bit_pool.h 00:04:58.637 TEST_HEADER include/spdk/blob_bdev.h 00:04:58.637 TEST_HEADER include/spdk/blobfs_bdev.h 00:04:58.637 TEST_HEADER include/spdk/blobfs.h 00:04:58.637 TEST_HEADER include/spdk/blob.h 00:04:58.637 TEST_HEADER include/spdk/conf.h 00:04:58.637 TEST_HEADER include/spdk/config.h 00:04:58.637 CC examples/interrupt_tgt/interrupt_tgt.o 00:04:58.637 TEST_HEADER include/spdk/cpuset.h 00:04:58.637 TEST_HEADER include/spdk/crc16.h 00:04:58.637 TEST_HEADER include/spdk/crc32.h 00:04:58.637 TEST_HEADER include/spdk/crc64.h 00:04:58.637 TEST_HEADER include/spdk/dif.h 00:04:58.637 TEST_HEADER include/spdk/dma.h 00:04:58.637 TEST_HEADER include/spdk/endian.h 00:04:58.637 TEST_HEADER include/spdk/env_dpdk.h 00:04:58.637 TEST_HEADER include/spdk/env.h 00:04:58.637 TEST_HEADER include/spdk/event.h 00:04:58.637 TEST_HEADER include/spdk/fd_group.h 00:04:58.637 TEST_HEADER include/spdk/fd.h 00:04:58.637 TEST_HEADER include/spdk/file.h 00:04:58.638 TEST_HEADER 
include/spdk/fsdev.h 00:04:58.638 TEST_HEADER include/spdk/fsdev_module.h 00:04:58.638 TEST_HEADER include/spdk/ftl.h 00:04:58.638 CC examples/util/zipf/zipf.o 00:04:58.638 TEST_HEADER include/spdk/fuse_dispatcher.h 00:04:58.638 CC examples/ioat/perf/perf.o 00:04:58.638 TEST_HEADER include/spdk/gpt_spec.h 00:04:58.638 TEST_HEADER include/spdk/hexlify.h 00:04:58.638 TEST_HEADER include/spdk/histogram_data.h 00:04:58.638 TEST_HEADER include/spdk/idxd.h 00:04:58.638 TEST_HEADER include/spdk/idxd_spec.h 00:04:58.638 TEST_HEADER include/spdk/init.h 00:04:58.638 TEST_HEADER include/spdk/ioat.h 00:04:58.638 TEST_HEADER include/spdk/ioat_spec.h 00:04:58.638 CC test/thread/poller_perf/poller_perf.o 00:04:58.638 TEST_HEADER include/spdk/iscsi_spec.h 00:04:58.638 TEST_HEADER include/spdk/json.h 00:04:58.638 TEST_HEADER include/spdk/jsonrpc.h 00:04:58.638 TEST_HEADER include/spdk/keyring.h 00:04:58.638 TEST_HEADER include/spdk/keyring_module.h 00:04:58.638 TEST_HEADER include/spdk/likely.h 00:04:58.638 TEST_HEADER include/spdk/log.h 00:04:58.638 TEST_HEADER include/spdk/lvol.h 00:04:58.638 CC test/app/bdev_svc/bdev_svc.o 00:04:58.638 TEST_HEADER include/spdk/md5.h 00:04:58.638 TEST_HEADER include/spdk/memory.h 00:04:58.638 TEST_HEADER include/spdk/mmio.h 00:04:58.638 TEST_HEADER include/spdk/nbd.h 00:04:58.638 TEST_HEADER include/spdk/net.h 00:04:58.638 TEST_HEADER include/spdk/notify.h 00:04:58.958 TEST_HEADER include/spdk/nvme.h 00:04:58.958 TEST_HEADER include/spdk/nvme_intel.h 00:04:58.958 CC test/dma/test_dma/test_dma.o 00:04:58.958 TEST_HEADER include/spdk/nvme_ocssd.h 00:04:58.958 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:04:58.958 TEST_HEADER include/spdk/nvme_spec.h 00:04:58.958 TEST_HEADER include/spdk/nvme_zns.h 00:04:58.958 TEST_HEADER include/spdk/nvmf_cmd.h 00:04:58.958 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:04:58.958 TEST_HEADER include/spdk/nvmf.h 00:04:58.958 TEST_HEADER include/spdk/nvmf_spec.h 00:04:58.958 TEST_HEADER include/spdk/nvmf_transport.h 
00:04:58.958 TEST_HEADER include/spdk/opal.h 00:04:58.958 CC test/env/mem_callbacks/mem_callbacks.o 00:04:58.958 TEST_HEADER include/spdk/opal_spec.h 00:04:58.958 TEST_HEADER include/spdk/pci_ids.h 00:04:58.958 TEST_HEADER include/spdk/pipe.h 00:04:58.958 TEST_HEADER include/spdk/queue.h 00:04:58.958 TEST_HEADER include/spdk/reduce.h 00:04:58.958 TEST_HEADER include/spdk/rpc.h 00:04:58.958 LINK rpc_client_test 00:04:58.958 TEST_HEADER include/spdk/scheduler.h 00:04:58.958 TEST_HEADER include/spdk/scsi.h 00:04:58.958 TEST_HEADER include/spdk/scsi_spec.h 00:04:58.958 TEST_HEADER include/spdk/sock.h 00:04:58.958 TEST_HEADER include/spdk/stdinc.h 00:04:58.958 TEST_HEADER include/spdk/string.h 00:04:58.958 TEST_HEADER include/spdk/thread.h 00:04:58.958 TEST_HEADER include/spdk/trace.h 00:04:58.958 TEST_HEADER include/spdk/trace_parser.h 00:04:58.958 TEST_HEADER include/spdk/tree.h 00:04:58.958 TEST_HEADER include/spdk/ublk.h 00:04:58.958 TEST_HEADER include/spdk/util.h 00:04:58.958 TEST_HEADER include/spdk/uuid.h 00:04:58.958 TEST_HEADER include/spdk/version.h 00:04:58.958 TEST_HEADER include/spdk/vfio_user_pci.h 00:04:58.958 TEST_HEADER include/spdk/vfio_user_spec.h 00:04:58.958 LINK zipf 00:04:58.958 TEST_HEADER include/spdk/vhost.h 00:04:58.958 TEST_HEADER include/spdk/vmd.h 00:04:58.958 TEST_HEADER include/spdk/xor.h 00:04:58.958 LINK interrupt_tgt 00:04:58.958 TEST_HEADER include/spdk/zipf.h 00:04:58.958 CXX test/cpp_headers/accel.o 00:04:58.958 LINK poller_perf 00:04:58.958 LINK bdev_svc 00:04:58.958 LINK ioat_perf 00:04:58.958 CXX test/cpp_headers/accel_module.o 00:04:58.958 CXX test/cpp_headers/assert.o 00:04:59.215 LINK spdk_trace 00:04:59.215 CXX test/cpp_headers/barrier.o 00:04:59.215 CXX test/cpp_headers/base64.o 00:04:59.215 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:04:59.215 CC examples/ioat/verify/verify.o 00:04:59.215 CC examples/sock/hello_world/hello_sock.o 00:04:59.215 CC examples/thread/thread/thread_ex.o 00:04:59.472 CC 
app/trace_record/trace_record.o 00:04:59.472 LINK test_dma 00:04:59.472 LINK mem_callbacks 00:04:59.472 CXX test/cpp_headers/bdev.o 00:04:59.472 CC examples/vmd/lsvmd/lsvmd.o 00:04:59.472 LINK verify 00:04:59.472 LINK thread 00:04:59.472 LINK lsvmd 00:04:59.472 LINK hello_sock 00:04:59.472 CC examples/idxd/perf/perf.o 00:04:59.731 CXX test/cpp_headers/bdev_module.o 00:04:59.731 CXX test/cpp_headers/bdev_zone.o 00:04:59.731 CC test/env/vtophys/vtophys.o 00:04:59.731 LINK spdk_trace_record 00:04:59.731 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:04:59.731 LINK vtophys 00:04:59.731 CC test/env/memory/memory_ut.o 00:04:59.731 CC examples/vmd/led/led.o 00:04:59.731 CXX test/cpp_headers/bit_array.o 00:04:59.731 LINK nvme_fuzz 00:05:00.036 CC test/env/pci/pci_ut.o 00:05:00.036 LINK env_dpdk_post_init 00:05:00.036 CC app/nvmf_tgt/nvmf_main.o 00:05:00.036 LINK idxd_perf 00:05:00.036 CC examples/nvme/hello_world/hello_world.o 00:05:00.036 CC examples/nvme/reconnect/reconnect.o 00:05:00.036 LINK led 00:05:00.036 CXX test/cpp_headers/bit_pool.o 00:05:00.036 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:05:00.036 LINK nvmf_tgt 00:05:00.294 CC examples/nvme/nvme_manage/nvme_manage.o 00:05:00.294 CC examples/nvme/arbitration/arbitration.o 00:05:00.294 LINK hello_world 00:05:00.294 CXX test/cpp_headers/blob_bdev.o 00:05:00.294 CC examples/nvme/hotplug/hotplug.o 00:05:00.294 LINK pci_ut 00:05:00.552 CXX test/cpp_headers/blobfs_bdev.o 00:05:00.552 LINK reconnect 00:05:00.552 CC app/iscsi_tgt/iscsi_tgt.o 00:05:00.552 CC examples/nvme/cmb_copy/cmb_copy.o 00:05:00.552 LINK hotplug 00:05:00.552 LINK arbitration 00:05:00.552 CXX test/cpp_headers/blobfs.o 00:05:00.552 CXX test/cpp_headers/blob.o 00:05:00.812 CC test/app/histogram_perf/histogram_perf.o 00:05:00.812 LINK cmb_copy 00:05:00.812 LINK iscsi_tgt 00:05:00.812 LINK nvme_manage 00:05:00.812 CXX test/cpp_headers/conf.o 00:05:00.812 LINK histogram_perf 00:05:00.812 CC examples/fsdev/hello_world/hello_fsdev.o 00:05:01.071 CC 
examples/accel/perf/accel_perf.o 00:05:01.071 CC examples/nvme/abort/abort.o 00:05:01.071 CXX test/cpp_headers/config.o 00:05:01.071 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:05:01.071 CC examples/blob/hello_world/hello_blob.o 00:05:01.071 CXX test/cpp_headers/cpuset.o 00:05:01.071 CC app/spdk_tgt/spdk_tgt.o 00:05:01.071 CC test/app/jsoncat/jsoncat.o 00:05:01.071 LINK memory_ut 00:05:01.329 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:05:01.329 CXX test/cpp_headers/crc16.o 00:05:01.329 LINK hello_fsdev 00:05:01.329 LINK jsoncat 00:05:01.329 LINK hello_blob 00:05:01.329 LINK spdk_tgt 00:05:01.329 LINK abort 00:05:01.329 CXX test/cpp_headers/crc32.o 00:05:01.329 CC test/app/stub/stub.o 00:05:01.586 LINK accel_perf 00:05:01.586 CXX test/cpp_headers/crc64.o 00:05:01.586 CXX test/cpp_headers/dif.o 00:05:01.586 CXX test/cpp_headers/dma.o 00:05:01.586 CC examples/blob/cli/blobcli.o 00:05:01.586 CC app/spdk_lspci/spdk_lspci.o 00:05:01.586 LINK stub 00:05:01.586 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:05:01.586 CXX test/cpp_headers/endian.o 00:05:01.586 LINK vhost_fuzz 00:05:01.586 CXX test/cpp_headers/env_dpdk.o 00:05:01.843 CXX test/cpp_headers/env.o 00:05:01.843 LINK spdk_lspci 00:05:01.843 LINK pmr_persistence 00:05:01.843 CXX test/cpp_headers/event.o 00:05:01.843 CXX test/cpp_headers/fd_group.o 00:05:01.843 CC app/spdk_nvme_perf/perf.o 00:05:01.843 CXX test/cpp_headers/fd.o 00:05:01.843 CXX test/cpp_headers/file.o 00:05:02.100 CXX test/cpp_headers/fsdev.o 00:05:02.100 CC examples/bdev/hello_world/hello_bdev.o 00:05:02.100 CXX test/cpp_headers/fsdev_module.o 00:05:02.100 CXX test/cpp_headers/ftl.o 00:05:02.100 CC examples/bdev/bdevperf/bdevperf.o 00:05:02.100 LINK iscsi_fuzz 00:05:02.100 CC test/event/event_perf/event_perf.o 00:05:02.100 LINK blobcli 00:05:02.100 CC test/event/reactor/reactor.o 00:05:02.100 CXX test/cpp_headers/fuse_dispatcher.o 00:05:02.357 LINK hello_bdev 00:05:02.357 CC test/event/reactor_perf/reactor_perf.o 00:05:02.357 CC 
test/event/app_repeat/app_repeat.o 00:05:02.357 LINK event_perf 00:05:02.357 CXX test/cpp_headers/gpt_spec.o 00:05:02.357 LINK reactor 00:05:02.357 CXX test/cpp_headers/hexlify.o 00:05:02.357 LINK reactor_perf 00:05:02.357 LINK app_repeat 00:05:02.357 CXX test/cpp_headers/histogram_data.o 00:05:02.357 CXX test/cpp_headers/idxd.o 00:05:02.357 CXX test/cpp_headers/idxd_spec.o 00:05:02.614 CXX test/cpp_headers/init.o 00:05:02.614 CC test/event/scheduler/scheduler.o 00:05:02.614 CXX test/cpp_headers/ioat.o 00:05:02.614 CXX test/cpp_headers/ioat_spec.o 00:05:02.614 CXX test/cpp_headers/iscsi_spec.o 00:05:02.614 CC app/spdk_top/spdk_top.o 00:05:02.614 CC app/spdk_nvme_discover/discovery_aer.o 00:05:02.871 CC app/spdk_nvme_identify/identify.o 00:05:02.871 LINK scheduler 00:05:02.871 CXX test/cpp_headers/json.o 00:05:02.871 CC app/vhost/vhost.o 00:05:02.871 CC app/spdk_dd/spdk_dd.o 00:05:02.871 LINK spdk_nvme_perf 00:05:02.871 CXX test/cpp_headers/jsonrpc.o 00:05:02.871 LINK spdk_nvme_discover 00:05:02.871 CC test/nvme/aer/aer.o 00:05:03.129 LINK vhost 00:05:03.129 LINK bdevperf 00:05:03.129 CXX test/cpp_headers/keyring.o 00:05:03.129 CC test/accel/dif/dif.o 00:05:03.386 CXX test/cpp_headers/keyring_module.o 00:05:03.386 LINK aer 00:05:03.386 LINK spdk_dd 00:05:03.386 CC test/blobfs/mkfs/mkfs.o 00:05:03.386 CC test/lvol/esnap/esnap.o 00:05:03.386 CC app/fio/nvme/fio_plugin.o 00:05:03.386 CXX test/cpp_headers/likely.o 00:05:03.644 CXX test/cpp_headers/log.o 00:05:03.644 CC examples/nvmf/nvmf/nvmf.o 00:05:03.644 LINK mkfs 00:05:03.644 CC test/nvme/reset/reset.o 00:05:03.644 CXX test/cpp_headers/lvol.o 00:05:03.901 CC app/fio/bdev/fio_plugin.o 00:05:03.901 CXX test/cpp_headers/md5.o 00:05:03.901 LINK spdk_nvme_identify 00:05:03.901 LINK spdk_top 00:05:03.901 LINK reset 00:05:03.901 LINK nvmf 00:05:03.901 CC test/nvme/sgl/sgl.o 00:05:04.159 CXX test/cpp_headers/memory.o 00:05:04.159 LINK dif 00:05:04.159 CC test/nvme/e2edp/nvme_dp.o 00:05:04.159 CXX test/cpp_headers/mmio.o 
00:05:04.159 CC test/nvme/overhead/overhead.o 00:05:04.159 LINK spdk_nvme 00:05:04.159 CC test/nvme/err_injection/err_injection.o 00:05:04.468 CXX test/cpp_headers/nbd.o 00:05:04.468 LINK sgl 00:05:04.468 CC test/nvme/startup/startup.o 00:05:04.468 CXX test/cpp_headers/net.o 00:05:04.468 LINK spdk_bdev 00:05:04.468 CC test/nvme/reserve/reserve.o 00:05:04.468 LINK nvme_dp 00:05:04.468 LINK err_injection 00:05:04.468 LINK overhead 00:05:04.468 LINK startup 00:05:04.468 CXX test/cpp_headers/notify.o 00:05:04.468 CC test/bdev/bdevio/bdevio.o 00:05:04.468 CXX test/cpp_headers/nvme.o 00:05:04.747 CC test/nvme/simple_copy/simple_copy.o 00:05:04.747 LINK reserve 00:05:04.747 CC test/nvme/connect_stress/connect_stress.o 00:05:04.747 CC test/nvme/boot_partition/boot_partition.o 00:05:04.747 CXX test/cpp_headers/nvme_intel.o 00:05:04.747 CC test/nvme/compliance/nvme_compliance.o 00:05:04.747 CC test/nvme/fused_ordering/fused_ordering.o 00:05:04.747 CC test/nvme/doorbell_aers/doorbell_aers.o 00:05:05.006 LINK simple_copy 00:05:05.006 CC test/nvme/fdp/fdp.o 00:05:05.006 LINK connect_stress 00:05:05.006 CXX test/cpp_headers/nvme_ocssd.o 00:05:05.006 LINK boot_partition 00:05:05.006 LINK bdevio 00:05:05.006 LINK fused_ordering 00:05:05.006 LINK doorbell_aers 00:05:05.006 CXX test/cpp_headers/nvme_ocssd_spec.o 00:05:05.006 CXX test/cpp_headers/nvme_spec.o 00:05:05.006 CXX test/cpp_headers/nvme_zns.o 00:05:05.265 CC test/nvme/cuse/cuse.o 00:05:05.265 LINK nvme_compliance 00:05:05.265 CXX test/cpp_headers/nvmf_cmd.o 00:05:05.265 CXX test/cpp_headers/nvmf_fc_spec.o 00:05:05.265 CXX test/cpp_headers/nvmf.o 00:05:05.265 CXX test/cpp_headers/nvmf_spec.o 00:05:05.265 CXX test/cpp_headers/nvmf_transport.o 00:05:05.265 CXX test/cpp_headers/opal.o 00:05:05.265 LINK fdp 00:05:05.265 CXX test/cpp_headers/opal_spec.o 00:05:05.525 CXX test/cpp_headers/pci_ids.o 00:05:05.525 CXX test/cpp_headers/pipe.o 00:05:05.525 CXX test/cpp_headers/queue.o 00:05:05.525 CXX test/cpp_headers/reduce.o 
00:05:05.525 CXX test/cpp_headers/rpc.o 00:05:05.525 CXX test/cpp_headers/scheduler.o 00:05:05.525 CXX test/cpp_headers/scsi.o 00:05:05.525 CXX test/cpp_headers/scsi_spec.o 00:05:05.525 CXX test/cpp_headers/sock.o 00:05:05.525 CXX test/cpp_headers/stdinc.o 00:05:05.525 CXX test/cpp_headers/string.o 00:05:05.525 CXX test/cpp_headers/thread.o 00:05:05.786 CXX test/cpp_headers/trace.o 00:05:05.786 CXX test/cpp_headers/trace_parser.o 00:05:05.786 CXX test/cpp_headers/tree.o 00:05:05.786 CXX test/cpp_headers/ublk.o 00:05:05.786 CXX test/cpp_headers/util.o 00:05:05.786 CXX test/cpp_headers/uuid.o 00:05:05.786 CXX test/cpp_headers/version.o 00:05:05.786 CXX test/cpp_headers/vfio_user_pci.o 00:05:05.786 CXX test/cpp_headers/vfio_user_spec.o 00:05:05.786 CXX test/cpp_headers/vhost.o 00:05:05.786 CXX test/cpp_headers/vmd.o 00:05:05.786 CXX test/cpp_headers/xor.o 00:05:05.786 CXX test/cpp_headers/zipf.o 00:05:06.725 LINK cuse 00:05:10.016 LINK esnap 00:05:10.275 00:05:10.275 real 1m36.222s 00:05:10.275 user 8m27.102s 00:05:10.275 sys 1m39.459s 00:05:10.275 03:56:03 make -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:05:10.275 03:56:03 make -- common/autotest_common.sh@10 -- $ set +x 00:05:10.275 ************************************ 00:05:10.275 END TEST make 00:05:10.275 ************************************ 00:05:10.275 03:56:03 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:05:10.275 03:56:03 -- pm/common@29 -- $ signal_monitor_resources TERM 00:05:10.275 03:56:03 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:05:10.275 03:56:03 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:05:10.275 03:56:03 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]] 00:05:10.275 03:56:03 -- pm/common@44 -- $ pid=5472 00:05:10.275 03:56:03 -- pm/common@50 -- $ kill -TERM 5472 00:05:10.275 03:56:03 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:05:10.275 03:56:03 -- pm/common@43 -- $ [[ -e 
/home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]] 00:05:10.275 03:56:03 -- pm/common@44 -- $ pid=5474 00:05:10.275 03:56:03 -- pm/common@50 -- $ kill -TERM 5474 00:05:10.275 03:56:03 -- spdk/autorun.sh@26 -- $ (( SPDK_TEST_UNITTEST == 1 || SPDK_RUN_FUNCTIONAL_TEST == 1 )) 00:05:10.275 03:56:03 -- spdk/autorun.sh@27 -- $ sudo -E /home/vagrant/spdk_repo/spdk/autotest.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:05:10.275 03:56:03 -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:10.275 03:56:03 -- common/autotest_common.sh@1711 -- # lcov --version 00:05:10.275 03:56:03 -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:10.275 03:56:03 -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:10.275 03:56:03 -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:10.276 03:56:03 -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:10.276 03:56:03 -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:10.276 03:56:03 -- scripts/common.sh@336 -- # IFS=.-: 00:05:10.276 03:56:03 -- scripts/common.sh@336 -- # read -ra ver1 00:05:10.276 03:56:03 -- scripts/common.sh@337 -- # IFS=.-: 00:05:10.276 03:56:03 -- scripts/common.sh@337 -- # read -ra ver2 00:05:10.276 03:56:03 -- scripts/common.sh@338 -- # local 'op=<' 00:05:10.276 03:56:03 -- scripts/common.sh@340 -- # ver1_l=2 00:05:10.276 03:56:03 -- scripts/common.sh@341 -- # ver2_l=1 00:05:10.276 03:56:03 -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:10.276 03:56:03 -- scripts/common.sh@344 -- # case "$op" in 00:05:10.276 03:56:03 -- scripts/common.sh@345 -- # : 1 00:05:10.276 03:56:03 -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:10.276 03:56:03 -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:10.276 03:56:03 -- scripts/common.sh@365 -- # decimal 1 00:05:10.276 03:56:03 -- scripts/common.sh@353 -- # local d=1 00:05:10.276 03:56:03 -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:10.276 03:56:03 -- scripts/common.sh@355 -- # echo 1 00:05:10.276 03:56:03 -- scripts/common.sh@365 -- # ver1[v]=1 00:05:10.276 03:56:03 -- scripts/common.sh@366 -- # decimal 2 00:05:10.276 03:56:03 -- scripts/common.sh@353 -- # local d=2 00:05:10.276 03:56:03 -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:10.276 03:56:03 -- scripts/common.sh@355 -- # echo 2 00:05:10.535 03:56:03 -- scripts/common.sh@366 -- # ver2[v]=2 00:05:10.535 03:56:03 -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:10.535 03:56:03 -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:10.535 03:56:03 -- scripts/common.sh@368 -- # return 0 00:05:10.535 03:56:03 -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:10.535 03:56:03 -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:10.535 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:10.535 --rc genhtml_branch_coverage=1 00:05:10.535 --rc genhtml_function_coverage=1 00:05:10.535 --rc genhtml_legend=1 00:05:10.536 --rc geninfo_all_blocks=1 00:05:10.536 --rc geninfo_unexecuted_blocks=1 00:05:10.536 00:05:10.536 ' 00:05:10.536 03:56:03 -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:10.536 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:10.536 --rc genhtml_branch_coverage=1 00:05:10.536 --rc genhtml_function_coverage=1 00:05:10.536 --rc genhtml_legend=1 00:05:10.536 --rc geninfo_all_blocks=1 00:05:10.536 --rc geninfo_unexecuted_blocks=1 00:05:10.536 00:05:10.536 ' 00:05:10.536 03:56:03 -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:05:10.536 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:10.536 --rc genhtml_branch_coverage=1 00:05:10.536 --rc 
genhtml_function_coverage=1 00:05:10.536 --rc genhtml_legend=1 00:05:10.536 --rc geninfo_all_blocks=1 00:05:10.536 --rc geninfo_unexecuted_blocks=1 00:05:10.536 00:05:10.536 ' 00:05:10.536 03:56:03 -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:10.536 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:10.536 --rc genhtml_branch_coverage=1 00:05:10.536 --rc genhtml_function_coverage=1 00:05:10.536 --rc genhtml_legend=1 00:05:10.536 --rc geninfo_all_blocks=1 00:05:10.536 --rc geninfo_unexecuted_blocks=1 00:05:10.536 00:05:10.536 ' 00:05:10.536 03:56:03 -- spdk/autotest.sh@25 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:05:10.536 03:56:03 -- nvmf/common.sh@7 -- # uname -s 00:05:10.536 03:56:03 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:10.536 03:56:03 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:10.536 03:56:03 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:10.536 03:56:03 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:10.536 03:56:03 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:10.536 03:56:03 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:10.536 03:56:03 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:10.536 03:56:03 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:10.536 03:56:03 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:10.536 03:56:03 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:10.536 03:56:03 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:51869f7d-801f-4513-a29b-682aef1a1ed9 00:05:10.536 03:56:03 -- nvmf/common.sh@18 -- # NVME_HOSTID=51869f7d-801f-4513-a29b-682aef1a1ed9 00:05:10.536 03:56:03 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:10.536 03:56:03 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:10.536 03:56:03 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:10.536 03:56:03 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 
00:05:10.536 03:56:03 -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:05:10.536 03:56:03 -- scripts/common.sh@15 -- # shopt -s extglob 00:05:10.536 03:56:03 -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:10.536 03:56:03 -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:10.536 03:56:03 -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:10.536 03:56:03 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:10.536 03:56:03 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:10.536 03:56:03 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:10.536 03:56:03 -- paths/export.sh@5 -- # export PATH 00:05:10.536 03:56:03 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:10.536 03:56:03 -- nvmf/common.sh@51 -- # : 0 00:05:10.536 03:56:03 -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:05:10.536 03:56:03 -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:05:10.536 03:56:03 -- nvmf/common.sh@25 
-- # '[' 0 -eq 1 ']' 00:05:10.536 03:56:03 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:10.536 03:56:03 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:10.536 03:56:03 -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:05:10.536 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:05:10.536 03:56:03 -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:05:10.536 03:56:03 -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:05:10.536 03:56:03 -- nvmf/common.sh@55 -- # have_pci_nics=0 00:05:10.536 03:56:03 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:05:10.536 03:56:03 -- spdk/autotest.sh@32 -- # uname -s 00:05:10.536 03:56:03 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:05:10.536 03:56:03 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:05:10.536 03:56:03 -- spdk/autotest.sh@34 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/coredumps 00:05:10.536 03:56:03 -- spdk/autotest.sh@39 -- # echo '|/home/vagrant/spdk_repo/spdk/scripts/core-collector.sh %P %s %t' 00:05:10.536 03:56:03 -- spdk/autotest.sh@40 -- # echo /home/vagrant/spdk_repo/spdk/../output/coredumps 00:05:10.536 03:56:03 -- spdk/autotest.sh@44 -- # modprobe nbd 00:05:10.536 03:56:03 -- spdk/autotest.sh@46 -- # type -P udevadm 00:05:10.536 03:56:03 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:05:10.536 03:56:03 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:05:10.536 03:56:03 -- spdk/autotest.sh@48 -- # udevadm_pid=54563 00:05:10.536 03:56:03 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:05:10.536 03:56:03 -- pm/common@17 -- # local monitor 00:05:10.536 03:56:03 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:05:10.536 03:56:03 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:05:10.536 03:56:03 -- pm/common@25 -- # sleep 1 00:05:10.536 03:56:03 -- pm/common@21 -- # date +%s 00:05:10.536 03:56:03 -- 
pm/common@21 -- # date +%s 00:05:10.536 03:56:03 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1733457363 00:05:10.536 03:56:03 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1733457363 00:05:10.536 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1733457363_collect-cpu-load.pm.log 00:05:10.536 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1733457363_collect-vmstat.pm.log 00:05:11.513 03:56:04 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:05:11.514 03:56:04 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:05:11.514 03:56:04 -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:11.514 03:56:04 -- common/autotest_common.sh@10 -- # set +x 00:05:11.514 03:56:04 -- spdk/autotest.sh@59 -- # create_test_list 00:05:11.514 03:56:04 -- common/autotest_common.sh@752 -- # xtrace_disable 00:05:11.514 03:56:04 -- common/autotest_common.sh@10 -- # set +x 00:05:11.514 03:56:04 -- spdk/autotest.sh@61 -- # dirname /home/vagrant/spdk_repo/spdk/autotest.sh 00:05:11.514 03:56:04 -- spdk/autotest.sh@61 -- # readlink -f /home/vagrant/spdk_repo/spdk 00:05:11.514 03:56:04 -- spdk/autotest.sh@61 -- # src=/home/vagrant/spdk_repo/spdk 00:05:11.514 03:56:04 -- spdk/autotest.sh@62 -- # out=/home/vagrant/spdk_repo/spdk/../output 00:05:11.514 03:56:04 -- spdk/autotest.sh@63 -- # cd /home/vagrant/spdk_repo/spdk 00:05:11.514 03:56:04 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:05:11.514 03:56:04 -- common/autotest_common.sh@1457 -- # uname 00:05:11.514 03:56:04 -- common/autotest_common.sh@1457 -- # '[' Linux = FreeBSD ']' 00:05:11.514 03:56:04 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:05:11.514 03:56:04 -- common/autotest_common.sh@1477 -- 
# uname 00:05:11.514 03:56:04 -- common/autotest_common.sh@1477 -- # [[ Linux = FreeBSD ]] 00:05:11.514 03:56:04 -- spdk/autotest.sh@68 -- # [[ y == y ]] 00:05:11.514 03:56:04 -- spdk/autotest.sh@70 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --version 00:05:11.780 lcov: LCOV version 1.15 00:05:11.780 03:56:04 -- spdk/autotest.sh@72 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -i -t Baseline -d /home/vagrant/spdk_repo/spdk -o /home/vagrant/spdk_repo/spdk/../output/cov_base.info 00:05:26.656 /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:05:26.656 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno 00:05:41.610 03:56:34 -- spdk/autotest.sh@76 -- # timing_enter pre_cleanup 00:05:41.610 03:56:34 -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:41.610 03:56:34 -- common/autotest_common.sh@10 -- # set +x 00:05:41.610 03:56:34 -- spdk/autotest.sh@78 -- # rm -f 00:05:41.610 03:56:34 -- spdk/autotest.sh@81 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:05:41.871 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:41.871 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:05:41.871 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:05:41.871 03:56:35 -- spdk/autotest.sh@83 -- # get_zoned_devs 00:05:41.871 03:56:35 -- common/autotest_common.sh@1657 -- # zoned_devs=() 00:05:41.871 03:56:35 -- common/autotest_common.sh@1657 -- # local -gA zoned_devs 00:05:41.871 03:56:35 -- common/autotest_common.sh@1658 -- # zoned_ctrls=() 00:05:42.133 
03:56:35 -- common/autotest_common.sh@1658 -- # local -A zoned_ctrls 00:05:42.133 03:56:35 -- common/autotest_common.sh@1659 -- # local nvme bdf ns 00:05:42.133 03:56:35 -- common/autotest_common.sh@1668 -- # for nvme in /sys/class/nvme/nvme* 00:05:42.133 03:56:35 -- common/autotest_common.sh@1669 -- # bdf=0000:00:11.0 00:05:42.133 03:56:35 -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:05:42.133 03:56:35 -- common/autotest_common.sh@1671 -- # is_block_zoned nvme0n1 00:05:42.133 03:56:35 -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:05:42.133 03:56:35 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:05:42.133 03:56:35 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:05:42.133 03:56:35 -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:05:42.133 03:56:35 -- common/autotest_common.sh@1671 -- # is_block_zoned nvme0n2 00:05:42.133 03:56:35 -- common/autotest_common.sh@1650 -- # local device=nvme0n2 00:05:42.133 03:56:35 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n2/queue/zoned ]] 00:05:42.133 03:56:35 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:05:42.133 03:56:35 -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:05:42.133 03:56:35 -- common/autotest_common.sh@1671 -- # is_block_zoned nvme0n3 00:05:42.133 03:56:35 -- common/autotest_common.sh@1650 -- # local device=nvme0n3 00:05:42.133 03:56:35 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n3/queue/zoned ]] 00:05:42.133 03:56:35 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:05:42.133 03:56:35 -- common/autotest_common.sh@1668 -- # for nvme in /sys/class/nvme/nvme* 00:05:42.133 03:56:35 -- common/autotest_common.sh@1669 -- # bdf=0000:00:10.0 00:05:42.133 03:56:35 -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:05:42.133 03:56:35 -- common/autotest_common.sh@1671 -- # is_block_zoned nvme1n1 00:05:42.133 03:56:35 -- 
common/autotest_common.sh@1650 -- # local device=nvme1n1 00:05:42.133 03:56:35 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:05:42.133 03:56:35 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:05:42.133 03:56:35 -- spdk/autotest.sh@85 -- # (( 0 > 0 )) 00:05:42.133 03:56:35 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:05:42.133 03:56:35 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:05:42.133 03:56:35 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme0n1 00:05:42.133 03:56:35 -- scripts/common.sh@381 -- # local block=/dev/nvme0n1 pt 00:05:42.133 03:56:35 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:05:42.133 No valid GPT data, bailing 00:05:42.133 03:56:35 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:05:42.133 03:56:35 -- scripts/common.sh@394 -- # pt= 00:05:42.133 03:56:35 -- scripts/common.sh@395 -- # return 1 00:05:42.133 03:56:35 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:05:42.133 1+0 records in 00:05:42.133 1+0 records out 00:05:42.133 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00645648 s, 162 MB/s 00:05:42.133 03:56:35 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:05:42.133 03:56:35 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:05:42.133 03:56:35 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme0n2 00:05:42.133 03:56:35 -- scripts/common.sh@381 -- # local block=/dev/nvme0n2 pt 00:05:42.133 03:56:35 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n2 00:05:42.133 No valid GPT data, bailing 00:05:42.133 03:56:35 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n2 00:05:42.133 03:56:35 -- scripts/common.sh@394 -- # pt= 00:05:42.133 03:56:35 -- scripts/common.sh@395 -- # return 1 00:05:42.133 03:56:35 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme0n2 bs=1M count=1 00:05:42.133 1+0 records in 00:05:42.133 1+0 records 
out 00:05:42.133 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00453841 s, 231 MB/s 00:05:42.133 03:56:35 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:05:42.133 03:56:35 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:05:42.133 03:56:35 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme0n3 00:05:42.133 03:56:35 -- scripts/common.sh@381 -- # local block=/dev/nvme0n3 pt 00:05:42.133 03:56:35 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n3 00:05:42.133 No valid GPT data, bailing 00:05:42.133 03:56:35 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n3 00:05:42.133 03:56:35 -- scripts/common.sh@394 -- # pt= 00:05:42.133 03:56:35 -- scripts/common.sh@395 -- # return 1 00:05:42.133 03:56:35 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme0n3 bs=1M count=1 00:05:42.392 1+0 records in 00:05:42.392 1+0 records out 00:05:42.392 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00628003 s, 167 MB/s 00:05:42.392 03:56:35 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:05:42.392 03:56:35 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:05:42.392 03:56:35 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n1 00:05:42.392 03:56:35 -- scripts/common.sh@381 -- # local block=/dev/nvme1n1 pt 00:05:42.392 03:56:35 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n1 00:05:42.392 No valid GPT data, bailing 00:05:42.392 03:56:35 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:05:42.392 03:56:35 -- scripts/common.sh@394 -- # pt= 00:05:42.392 03:56:35 -- scripts/common.sh@395 -- # return 1 00:05:42.392 03:56:35 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n1 bs=1M count=1 00:05:42.392 1+0 records in 00:05:42.392 1+0 records out 00:05:42.392 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00593806 s, 177 MB/s 00:05:42.392 03:56:35 -- spdk/autotest.sh@105 -- # sync 00:05:42.392 03:56:35 -- spdk/autotest.sh@107 -- # xtrace_disable_per_cmd 
reap_spdk_processes 00:05:42.392 03:56:35 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:05:42.392 03:56:35 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:05:45.681 03:56:38 -- spdk/autotest.sh@111 -- # uname -s 00:05:45.681 03:56:38 -- spdk/autotest.sh@111 -- # [[ Linux == Linux ]] 00:05:45.681 03:56:38 -- spdk/autotest.sh@111 -- # [[ 0 -eq 1 ]] 00:05:45.681 03:56:38 -- spdk/autotest.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:05:45.940 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:45.940 Hugepages 00:05:45.940 node hugesize free / total 00:05:45.940 node0 1048576kB 0 / 0 00:05:45.940 node0 2048kB 0 / 0 00:05:45.940 00:05:45.940 Type BDF Vendor Device NUMA Driver Device Block devices 00:05:46.200 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:05:46.200 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme1 nvme1n1 00:05:46.460 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme0 nvme0n1 nvme0n2 nvme0n3 00:05:46.460 03:56:39 -- spdk/autotest.sh@117 -- # uname -s 00:05:46.460 03:56:39 -- spdk/autotest.sh@117 -- # [[ Linux == Linux ]] 00:05:46.460 03:56:39 -- spdk/autotest.sh@119 -- # nvme_namespace_revert 00:05:46.460 03:56:39 -- common/autotest_common.sh@1516 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:05:47.029 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:47.289 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:05:47.289 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:05:47.289 03:56:40 -- common/autotest_common.sh@1517 -- # sleep 1 00:05:48.227 03:56:41 -- common/autotest_common.sh@1518 -- # bdfs=() 00:05:48.227 03:56:41 -- common/autotest_common.sh@1518 -- # local bdfs 00:05:48.227 03:56:41 -- common/autotest_common.sh@1520 -- # bdfs=($(get_nvme_bdfs)) 00:05:48.227 03:56:41 -- common/autotest_common.sh@1520 -- # get_nvme_bdfs 
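The `is_block_zoned` checks traced above probe each namespace's `queue/zoned` sysfs attribute before deciding whether to wipe it. A hedged, simplified re-implementation of that probe (the real helper lives in `common/autotest_common.sh`; the mock sysfs tree below is purely illustrative so the sketch runs anywhere):

```shell
#!/usr/bin/env bash
is_block_zoned() {
    local device=$1 sysfs_root=${2:-/sys/block}
    # Zoned devices report host-aware/host-managed here; "none" (or a
    # missing attribute on old kernels) means a conventional device.
    [[ -e $sysfs_root/$device/queue/zoned ]] || return 1
    [[ $(<"$sysfs_root/$device/queue/zoned") != none ]]
}

# Exercise it against a mock sysfs tree instead of real hardware.
mock=$(mktemp -d)
mkdir -p "$mock/nvme0n1/queue" "$mock/nvme1n1/queue"
echo none         > "$mock/nvme0n1/queue/zoned"
echo host-managed > "$mock/nvme1n1/queue/zoned"

z0=$(is_block_zoned nvme0n1 "$mock" && echo zoned || echo not-zoned)
z1=$(is_block_zoned nvme1n1 "$mock" && echo zoned || echo not-zoned)
echo "nvme0n1: $z0, nvme1n1: $z1"   # nvme0n1: not-zoned, nvme1n1: zoned
rm -rf "$mock"
```

In the run above every namespace reported `none`, so no device was excluded and the subsequent GPT check plus 1 MiB `dd` wipe ran on all four.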
00:05:48.227 03:56:41 -- common/autotest_common.sh@1498 -- # bdfs=() 00:05:48.227 03:56:41 -- common/autotest_common.sh@1498 -- # local bdfs 00:05:48.227 03:56:41 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:05:48.487 03:56:41 -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:05:48.487 03:56:41 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:05:48.487 03:56:41 -- common/autotest_common.sh@1500 -- # (( 2 == 0 )) 00:05:48.487 03:56:41 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:05:48.487 03:56:41 -- common/autotest_common.sh@1522 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:05:48.747 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:49.007 Waiting for block devices as requested 00:05:49.007 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:05:49.007 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:05:49.007 03:56:42 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:05:49.007 03:56:42 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:00:10.0 00:05:49.007 03:56:42 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 00:05:49.007 03:56:42 -- common/autotest_common.sh@1487 -- # grep 0000:00:10.0/nvme/nvme 00:05:49.007 03:56:42 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:05:49.007 03:56:42 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 ]] 00:05:49.007 03:56:42 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:05:49.007 03:56:42 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme1 00:05:49.007 03:56:42 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme1 00:05:49.007 
03:56:42 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme1 ]] 00:05:49.007 03:56:42 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme1 00:05:49.007 03:56:42 -- common/autotest_common.sh@1531 -- # grep oacs 00:05:49.007 03:56:42 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:05:49.268 03:56:42 -- common/autotest_common.sh@1531 -- # oacs=' 0x12a' 00:05:49.268 03:56:42 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:05:49.268 03:56:42 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:05:49.268 03:56:42 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:05:49.268 03:56:42 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme1 00:05:49.268 03:56:42 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:05:49.268 03:56:42 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:05:49.268 03:56:42 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:05:49.268 03:56:42 -- common/autotest_common.sh@1543 -- # continue 00:05:49.268 03:56:42 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:05:49.268 03:56:42 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:00:11.0 00:05:49.268 03:56:42 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 00:05:49.268 03:56:42 -- common/autotest_common.sh@1487 -- # grep 0000:00:11.0/nvme/nvme 00:05:49.268 03:56:42 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:05:49.268 03:56:42 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 ]] 00:05:49.268 03:56:42 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:05:49.268 03:56:42 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme0 00:05:49.268 03:56:42 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme0 00:05:49.268 03:56:42 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme0 ]] 00:05:49.268 03:56:42 -- 
common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme0 00:05:49.268 03:56:42 -- common/autotest_common.sh@1531 -- # grep oacs 00:05:49.268 03:56:42 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:05:49.268 03:56:42 -- common/autotest_common.sh@1531 -- # oacs=' 0x12a' 00:05:49.268 03:56:42 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:05:49.268 03:56:42 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:05:49.268 03:56:42 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme0 00:05:49.268 03:56:42 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:05:49.268 03:56:42 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:05:49.268 03:56:42 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:05:49.268 03:56:42 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:05:49.268 03:56:42 -- common/autotest_common.sh@1543 -- # continue 00:05:49.268 03:56:42 -- spdk/autotest.sh@122 -- # timing_exit pre_cleanup 00:05:49.268 03:56:42 -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:49.268 03:56:42 -- common/autotest_common.sh@10 -- # set +x 00:05:49.268 03:56:42 -- spdk/autotest.sh@125 -- # timing_enter afterboot 00:05:49.268 03:56:42 -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:49.268 03:56:42 -- common/autotest_common.sh@10 -- # set +x 00:05:49.268 03:56:42 -- spdk/autotest.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:05:50.208 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:50.208 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:05:50.208 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:05:50.208 03:56:43 -- spdk/autotest.sh@127 -- # timing_exit afterboot 00:05:50.208 03:56:43 -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:50.208 03:56:43 -- common/autotest_common.sh@10 -- # set +x 00:05:50.208 03:56:43 -- spdk/autotest.sh@131 -- # opal_revert_cleanup 00:05:50.208 03:56:43 -- 
common/autotest_common.sh@1578 -- # mapfile -t bdfs 00:05:50.208 03:56:43 -- common/autotest_common.sh@1578 -- # get_nvme_bdfs_by_id 0x0a54 00:05:50.208 03:56:43 -- common/autotest_common.sh@1563 -- # bdfs=() 00:05:50.208 03:56:43 -- common/autotest_common.sh@1563 -- # _bdfs=() 00:05:50.208 03:56:43 -- common/autotest_common.sh@1563 -- # local bdfs _bdfs 00:05:50.208 03:56:43 -- common/autotest_common.sh@1564 -- # _bdfs=($(get_nvme_bdfs)) 00:05:50.208 03:56:43 -- common/autotest_common.sh@1564 -- # get_nvme_bdfs 00:05:50.208 03:56:43 -- common/autotest_common.sh@1498 -- # bdfs=() 00:05:50.208 03:56:43 -- common/autotest_common.sh@1498 -- # local bdfs 00:05:50.208 03:56:43 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:05:50.208 03:56:43 -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:05:50.208 03:56:43 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:05:50.469 03:56:43 -- common/autotest_common.sh@1500 -- # (( 2 == 0 )) 00:05:50.469 03:56:43 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:05:50.469 03:56:43 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:05:50.469 03:56:43 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:00:10.0/device 00:05:50.469 03:56:43 -- common/autotest_common.sh@1566 -- # device=0x0010 00:05:50.469 03:56:43 -- common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:05:50.469 03:56:43 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:05:50.469 03:56:43 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:00:11.0/device 00:05:50.469 03:56:43 -- common/autotest_common.sh@1566 -- # device=0x0010 00:05:50.469 03:56:43 -- common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:05:50.469 03:56:43 -- common/autotest_common.sh@1572 -- # (( 0 > 0 )) 00:05:50.469 03:56:43 -- 
common/autotest_common.sh@1572 -- # return 0 00:05:50.469 03:56:43 -- common/autotest_common.sh@1579 -- # [[ -z '' ]] 00:05:50.469 03:56:43 -- common/autotest_common.sh@1580 -- # return 0 00:05:50.469 03:56:43 -- spdk/autotest.sh@137 -- # '[' 0 -eq 1 ']' 00:05:50.469 03:56:43 -- spdk/autotest.sh@141 -- # '[' 1 -eq 1 ']' 00:05:50.469 03:56:43 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:05:50.469 03:56:43 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:05:50.469 03:56:43 -- spdk/autotest.sh@149 -- # timing_enter lib 00:05:50.469 03:56:43 -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:50.469 03:56:43 -- common/autotest_common.sh@10 -- # set +x 00:05:50.469 03:56:43 -- spdk/autotest.sh@151 -- # [[ 0 -eq 1 ]] 00:05:50.469 03:56:43 -- spdk/autotest.sh@155 -- # run_test env /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:05:50.469 03:56:43 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:50.469 03:56:43 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:50.469 03:56:43 -- common/autotest_common.sh@10 -- # set +x 00:05:50.469 ************************************ 00:05:50.469 START TEST env 00:05:50.469 ************************************ 00:05:50.469 03:56:43 env -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:05:50.469 * Looking for test storage... 
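The `nvme id-ctrl` parsing traced above extracts the `oacs` and `unvmcap` fields with `grep | cut -d: -f2`, then tests OACS bit 3 (Namespace Management support, mask `0x8`). A hedged sketch of that parsing against a canned sample, not a real controller:

```shell
#!/usr/bin/env bash
# Canned excerpt standing in for real `nvme id-ctrl /dev/nvme0` output.
sample_id_ctrl='vid       : 0x1b36
oacs      : 0x12a
unvmcap   : 0'

# Extract a field the same way the harness traces it: grep + cut -d: -f2.
get_field() { grep "^$1" <<<"$sample_id_ctrl" | cut -d: -f2 | tr -d ' '; }

oacs=$(get_field oacs)
# Bit 3 of OACS advertises Namespace Management/Attachment support.
oacs_ns_manage=$(( oacs & 0x8 ))
unvmcap=$(get_field unvmcap)

echo "oacs=$oacs ns_manage=$oacs_ns_manage unvmcap=$unvmcap"
# oacs=0x12a ns_manage=8 unvmcap=0
```

This reproduces the values in the log: `oacs=' 0x12a'` yields `oacs_ns_manage=8` (bit set), and `unvmcap=' 0'` means no unallocated capacity, so the namespace-revert step is skipped via `continue`.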
00:05:50.469 * Found test storage at /home/vagrant/spdk_repo/spdk/test/env 00:05:50.469 03:56:43 env -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:50.469 03:56:43 env -- common/autotest_common.sh@1711 -- # lcov --version 00:05:50.469 03:56:43 env -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:50.469 03:56:43 env -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:50.469 03:56:43 env -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:50.469 03:56:43 env -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:50.469 03:56:43 env -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:50.469 03:56:43 env -- scripts/common.sh@336 -- # IFS=.-: 00:05:50.469 03:56:43 env -- scripts/common.sh@336 -- # read -ra ver1 00:05:50.469 03:56:43 env -- scripts/common.sh@337 -- # IFS=.-: 00:05:50.469 03:56:43 env -- scripts/common.sh@337 -- # read -ra ver2 00:05:50.469 03:56:43 env -- scripts/common.sh@338 -- # local 'op=<' 00:05:50.469 03:56:43 env -- scripts/common.sh@340 -- # ver1_l=2 00:05:50.469 03:56:43 env -- scripts/common.sh@341 -- # ver2_l=1 00:05:50.469 03:56:43 env -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:50.469 03:56:43 env -- scripts/common.sh@344 -- # case "$op" in 00:05:50.469 03:56:43 env -- scripts/common.sh@345 -- # : 1 00:05:50.469 03:56:43 env -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:50.469 03:56:43 env -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:50.730 03:56:43 env -- scripts/common.sh@365 -- # decimal 1 00:05:50.730 03:56:43 env -- scripts/common.sh@353 -- # local d=1 00:05:50.730 03:56:43 env -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:50.730 03:56:43 env -- scripts/common.sh@355 -- # echo 1 00:05:50.730 03:56:43 env -- scripts/common.sh@365 -- # ver1[v]=1 00:05:50.730 03:56:43 env -- scripts/common.sh@366 -- # decimal 2 00:05:50.730 03:56:43 env -- scripts/common.sh@353 -- # local d=2 00:05:50.730 03:56:43 env -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:50.730 03:56:43 env -- scripts/common.sh@355 -- # echo 2 00:05:50.730 03:56:43 env -- scripts/common.sh@366 -- # ver2[v]=2 00:05:50.730 03:56:43 env -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:50.730 03:56:43 env -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:50.730 03:56:43 env -- scripts/common.sh@368 -- # return 0 00:05:50.730 03:56:43 env -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:50.730 03:56:43 env -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:50.730 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:50.730 --rc genhtml_branch_coverage=1 00:05:50.730 --rc genhtml_function_coverage=1 00:05:50.730 --rc genhtml_legend=1 00:05:50.730 --rc geninfo_all_blocks=1 00:05:50.730 --rc geninfo_unexecuted_blocks=1 00:05:50.730 00:05:50.730 ' 00:05:50.730 03:56:43 env -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:50.730 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:50.730 --rc genhtml_branch_coverage=1 00:05:50.730 --rc genhtml_function_coverage=1 00:05:50.730 --rc genhtml_legend=1 00:05:50.730 --rc geninfo_all_blocks=1 00:05:50.730 --rc geninfo_unexecuted_blocks=1 00:05:50.730 00:05:50.730 ' 00:05:50.730 03:56:43 env -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:05:50.730 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 
00:05:50.730 --rc genhtml_branch_coverage=1 00:05:50.730 --rc genhtml_function_coverage=1 00:05:50.730 --rc genhtml_legend=1 00:05:50.730 --rc geninfo_all_blocks=1 00:05:50.730 --rc geninfo_unexecuted_blocks=1 00:05:50.730 00:05:50.730 ' 00:05:50.730 03:56:43 env -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:50.730 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:50.730 --rc genhtml_branch_coverage=1 00:05:50.730 --rc genhtml_function_coverage=1 00:05:50.730 --rc genhtml_legend=1 00:05:50.730 --rc geninfo_all_blocks=1 00:05:50.730 --rc geninfo_unexecuted_blocks=1 00:05:50.730 00:05:50.730 ' 00:05:50.730 03:56:43 env -- env/env.sh@10 -- # run_test env_memory /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:05:50.730 03:56:43 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:50.730 03:56:43 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:50.730 03:56:43 env -- common/autotest_common.sh@10 -- # set +x 00:05:50.730 ************************************ 00:05:50.730 START TEST env_memory 00:05:50.730 ************************************ 00:05:50.730 03:56:43 env.env_memory -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:05:50.730 00:05:50.730 00:05:50.730 CUnit - A unit testing framework for C - Version 2.1-3 00:05:50.730 http://cunit.sourceforge.net/ 00:05:50.730 00:05:50.730 00:05:50.730 Suite: memory 00:05:50.730 Test: alloc and free memory map ...[2024-12-06 03:56:43.917331] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:05:50.730 passed 00:05:50.730 Test: mem map translation ...[2024-12-06 03:56:43.960253] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:05:50.730 [2024-12-06 03:56:43.960343] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 
595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:05:50.730 [2024-12-06 03:56:43.960443] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 589:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:05:50.730 [2024-12-06 03:56:43.960503] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 605:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:05:50.730 passed 00:05:50.730 Test: mem map registration ...[2024-12-06 03:56:44.025840] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=200000 len=1234 00:05:50.730 [2024-12-06 03:56:44.025935] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=4d2 len=2097152 00:05:50.730 passed 00:05:50.991 Test: mem map adjacent registrations ...passed 00:05:50.991 00:05:50.991 Run Summary: Type Total Ran Passed Failed Inactive 00:05:50.991 suites 1 1 n/a 0 0 00:05:50.991 tests 4 4 4 0 0 00:05:50.991 asserts 152 152 152 0 n/a 00:05:50.991 00:05:50.991 Elapsed time = 0.237 seconds 00:05:50.991 00:05:50.991 real 0m0.285s 00:05:50.991 user 0m0.246s 00:05:50.991 sys 0m0.029s 00:05:50.991 03:56:44 env.env_memory -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:50.991 03:56:44 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:05:50.991 ************************************ 00:05:50.991 END TEST env_memory 00:05:50.991 ************************************ 00:05:50.991 03:56:44 env -- env/env.sh@11 -- # run_test env_vtophys /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:05:50.991 03:56:44 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:50.991 03:56:44 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:50.991 03:56:44 env -- common/autotest_common.sh@10 -- # set +x 00:05:50.991 
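The `lt 1.15 2` trace near the start of TEST env shows `scripts/common.sh` splitting version strings into arrays and comparing component-by-component to decide which LCOV flags the installed lcov supports. A hedged, simplified re-implementation (dot-separated numeric parts only; the real `cmp_versions` also splits on `-` and `:`):

```shell
#!/usr/bin/env bash
version_lt() {
    local -a v1 v2
    IFS=. read -ra v1 <<<"$1"
    IFS=. read -ra v2 <<<"$2"
    local i n=$(( ${#v1[@]} > ${#v2[@]} ? ${#v1[@]} : ${#v2[@]} ))
    for (( i = 0; i < n; i++ )); do
        # Missing components compare as 0 (so 2 behaves like 2.0).
        local a=${v1[i]:-0} b=${v2[i]:-0}
        (( a < b )) && return 0
        (( a > b )) && return 1
    done
    return 1   # equal is not less-than
}

version_lt 1.15 2   && r1=yes || r1=no   # true: 1 < 2 at the first part
version_lt 2.1 2.1  && r2=yes || r2=no   # false: versions are equal
echo "1.15<2: $r1, 2.1<2.1: $r2"
```

Here lcov 1.15 is older than 2, so the harness exports the `--rc lcov_*` option spelling compatible with the 1.x series, as seen in the `LCOV_OPTS` export that follows.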
************************************ 00:05:50.991 START TEST env_vtophys 00:05:50.991 ************************************ 00:05:50.991 03:56:44 env.env_vtophys -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:05:50.991 EAL: lib.eal log level changed from notice to debug 00:05:50.991 EAL: Detected lcore 0 as core 0 on socket 0 00:05:50.991 EAL: Detected lcore 1 as core 0 on socket 0 00:05:50.991 EAL: Detected lcore 2 as core 0 on socket 0 00:05:50.991 EAL: Detected lcore 3 as core 0 on socket 0 00:05:50.991 EAL: Detected lcore 4 as core 0 on socket 0 00:05:50.991 EAL: Detected lcore 5 as core 0 on socket 0 00:05:50.991 EAL: Detected lcore 6 as core 0 on socket 0 00:05:50.991 EAL: Detected lcore 7 as core 0 on socket 0 00:05:50.991 EAL: Detected lcore 8 as core 0 on socket 0 00:05:50.991 EAL: Detected lcore 9 as core 0 on socket 0 00:05:50.991 EAL: Maximum logical cores by configuration: 128 00:05:50.991 EAL: Detected CPU lcores: 10 00:05:50.991 EAL: Detected NUMA nodes: 1 00:05:50.991 EAL: Checking presence of .so 'librte_eal.so.24.1' 00:05:50.991 EAL: Detected shared linkage of DPDK 00:05:50.991 EAL: No shared files mode enabled, IPC will be disabled 00:05:50.991 EAL: Selected IOVA mode 'PA' 00:05:50.991 EAL: Probing VFIO support... 00:05:50.991 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:05:50.991 EAL: VFIO modules not loaded, skipping VFIO support... 00:05:50.991 EAL: Ask a virtual area of 0x2e000 bytes 00:05:50.991 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:05:50.991 EAL: Setting up physically contiguous memory... 
00:05:50.991 EAL: Setting maximum number of open files to 524288 00:05:50.991 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:05:50.991 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:05:50.991 EAL: Ask a virtual area of 0x61000 bytes 00:05:50.991 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:05:50.991 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:50.991 EAL: Ask a virtual area of 0x400000000 bytes 00:05:50.991 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:05:50.991 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:05:50.991 EAL: Ask a virtual area of 0x61000 bytes 00:05:50.991 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:05:50.991 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:50.991 EAL: Ask a virtual area of 0x400000000 bytes 00:05:50.991 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:05:50.991 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:05:50.991 EAL: Ask a virtual area of 0x61000 bytes 00:05:50.991 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:05:50.991 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:50.991 EAL: Ask a virtual area of 0x400000000 bytes 00:05:50.991 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:05:50.991 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:05:50.991 EAL: Ask a virtual area of 0x61000 bytes 00:05:50.991 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:05:50.991 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:50.991 EAL: Ask a virtual area of 0x400000000 bytes 00:05:50.991 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:05:50.991 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:05:50.991 EAL: Hugepages will be freed exactly as allocated. 
00:05:50.991 EAL: No shared files mode enabled, IPC is disabled 00:05:50.991 EAL: No shared files mode enabled, IPC is disabled 00:05:51.252 EAL: TSC frequency is ~2290000 KHz 00:05:51.252 EAL: Main lcore 0 is ready (tid=7f6043930a40;cpuset=[0]) 00:05:51.252 EAL: Trying to obtain current memory policy. 00:05:51.252 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:51.252 EAL: Restoring previous memory policy: 0 00:05:51.252 EAL: request: mp_malloc_sync 00:05:51.252 EAL: No shared files mode enabled, IPC is disabled 00:05:51.252 EAL: Heap on socket 0 was expanded by 2MB 00:05:51.252 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:05:51.252 EAL: No PCI address specified using 'addr=' in: bus=pci 00:05:51.252 EAL: Mem event callback 'spdk:(nil)' registered 00:05:51.252 EAL: Module /sys/module/vfio_pci not found! error 2 (No such file or directory) 00:05:51.252 00:05:51.252 00:05:51.252 CUnit - A unit testing framework for C - Version 2.1-3 00:05:51.252 http://cunit.sourceforge.net/ 00:05:51.252 00:05:51.252 00:05:51.252 Suite: components_suite 00:05:51.522 Test: vtophys_malloc_test ...passed 00:05:51.522 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:05:51.522 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:51.522 EAL: Restoring previous memory policy: 4 00:05:51.522 EAL: Calling mem event callback 'spdk:(nil)' 00:05:51.522 EAL: request: mp_malloc_sync 00:05:51.522 EAL: No shared files mode enabled, IPC is disabled 00:05:51.522 EAL: Heap on socket 0 was expanded by 4MB 00:05:51.522 EAL: Calling mem event callback 'spdk:(nil)' 00:05:51.522 EAL: request: mp_malloc_sync 00:05:51.522 EAL: No shared files mode enabled, IPC is disabled 00:05:51.522 EAL: Heap on socket 0 was shrunk by 4MB 00:05:51.522 EAL: Trying to obtain current memory policy. 
00:05:51.522 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:51.522 EAL: Restoring previous memory policy: 4 00:05:51.522 EAL: Calling mem event callback 'spdk:(nil)' 00:05:51.522 EAL: request: mp_malloc_sync 00:05:51.522 EAL: No shared files mode enabled, IPC is disabled 00:05:51.522 EAL: Heap on socket 0 was expanded by 6MB 00:05:51.522 EAL: Calling mem event callback 'spdk:(nil)' 00:05:51.522 EAL: request: mp_malloc_sync 00:05:51.522 EAL: No shared files mode enabled, IPC is disabled 00:05:51.522 EAL: Heap on socket 0 was shrunk by 6MB 00:05:51.522 EAL: Trying to obtain current memory policy. 00:05:51.522 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:51.522 EAL: Restoring previous memory policy: 4 00:05:51.522 EAL: Calling mem event callback 'spdk:(nil)' 00:05:51.522 EAL: request: mp_malloc_sync 00:05:51.522 EAL: No shared files mode enabled, IPC is disabled 00:05:51.522 EAL: Heap on socket 0 was expanded by 10MB 00:05:51.522 EAL: Calling mem event callback 'spdk:(nil)' 00:05:51.522 EAL: request: mp_malloc_sync 00:05:51.522 EAL: No shared files mode enabled, IPC is disabled 00:05:51.522 EAL: Heap on socket 0 was shrunk by 10MB 00:05:51.522 EAL: Trying to obtain current memory policy. 00:05:51.522 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:51.522 EAL: Restoring previous memory policy: 4 00:05:51.522 EAL: Calling mem event callback 'spdk:(nil)' 00:05:51.522 EAL: request: mp_malloc_sync 00:05:51.522 EAL: No shared files mode enabled, IPC is disabled 00:05:51.522 EAL: Heap on socket 0 was expanded by 18MB 00:05:51.522 EAL: Calling mem event callback 'spdk:(nil)' 00:05:51.522 EAL: request: mp_malloc_sync 00:05:51.522 EAL: No shared files mode enabled, IPC is disabled 00:05:51.522 EAL: Heap on socket 0 was shrunk by 18MB 00:05:51.797 EAL: Trying to obtain current memory policy. 
00:05:51.797 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:51.797 EAL: Restoring previous memory policy: 4 00:05:51.797 EAL: Calling mem event callback 'spdk:(nil)' 00:05:51.797 EAL: request: mp_malloc_sync 00:05:51.797 EAL: No shared files mode enabled, IPC is disabled 00:05:51.797 EAL: Heap on socket 0 was expanded by 34MB 00:05:51.797 EAL: Calling mem event callback 'spdk:(nil)' 00:05:51.797 EAL: request: mp_malloc_sync 00:05:51.797 EAL: No shared files mode enabled, IPC is disabled 00:05:51.797 EAL: Heap on socket 0 was shrunk by 34MB 00:05:51.797 EAL: Trying to obtain current memory policy. 00:05:51.797 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:51.797 EAL: Restoring previous memory policy: 4 00:05:51.797 EAL: Calling mem event callback 'spdk:(nil)' 00:05:51.797 EAL: request: mp_malloc_sync 00:05:51.797 EAL: No shared files mode enabled, IPC is disabled 00:05:51.797 EAL: Heap on socket 0 was expanded by 66MB 00:05:51.797 EAL: Calling mem event callback 'spdk:(nil)' 00:05:51.797 EAL: request: mp_malloc_sync 00:05:51.797 EAL: No shared files mode enabled, IPC is disabled 00:05:51.797 EAL: Heap on socket 0 was shrunk by 66MB 00:05:52.057 EAL: Trying to obtain current memory policy. 00:05:52.057 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:52.057 EAL: Restoring previous memory policy: 4 00:05:52.057 EAL: Calling mem event callback 'spdk:(nil)' 00:05:52.057 EAL: request: mp_malloc_sync 00:05:52.057 EAL: No shared files mode enabled, IPC is disabled 00:05:52.057 EAL: Heap on socket 0 was expanded by 130MB 00:05:52.317 EAL: Calling mem event callback 'spdk:(nil)' 00:05:52.317 EAL: request: mp_malloc_sync 00:05:52.317 EAL: No shared files mode enabled, IPC is disabled 00:05:52.317 EAL: Heap on socket 0 was shrunk by 130MB 00:05:52.576 EAL: Trying to obtain current memory policy. 
00:05:52.576 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:52.576 EAL: Restoring previous memory policy: 4 00:05:52.576 EAL: Calling mem event callback 'spdk:(nil)' 00:05:52.576 EAL: request: mp_malloc_sync 00:05:52.576 EAL: No shared files mode enabled, IPC is disabled 00:05:52.576 EAL: Heap on socket 0 was expanded by 258MB 00:05:53.145 EAL: Calling mem event callback 'spdk:(nil)' 00:05:53.145 EAL: request: mp_malloc_sync 00:05:53.145 EAL: No shared files mode enabled, IPC is disabled 00:05:53.145 EAL: Heap on socket 0 was shrunk by 258MB 00:05:53.405 EAL: Trying to obtain current memory policy. 00:05:53.405 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:53.665 EAL: Restoring previous memory policy: 4 00:05:53.665 EAL: Calling mem event callback 'spdk:(nil)' 00:05:53.665 EAL: request: mp_malloc_sync 00:05:53.665 EAL: No shared files mode enabled, IPC is disabled 00:05:53.665 EAL: Heap on socket 0 was expanded by 514MB 00:05:54.604 EAL: Calling mem event callback 'spdk:(nil)' 00:05:54.604 EAL: request: mp_malloc_sync 00:05:54.604 EAL: No shared files mode enabled, IPC is disabled 00:05:54.604 EAL: Heap on socket 0 was shrunk by 514MB 00:05:55.539 EAL: Trying to obtain current memory policy. 
00:05:55.539 EAL: Setting policy MPOL_PREFERRED for socket 0
00:05:55.806 EAL: Restoring previous memory policy: 4
00:05:55.806 EAL: Calling mem event callback 'spdk:(nil)'
00:05:55.806 EAL: request: mp_malloc_sync
00:05:55.806 EAL: No shared files mode enabled, IPC is disabled
00:05:55.806 EAL: Heap on socket 0 was expanded by 1026MB
00:05:57.714 EAL: Calling mem event callback 'spdk:(nil)'
00:05:57.714 EAL: request: mp_malloc_sync
00:05:57.714 EAL: No shared files mode enabled, IPC is disabled
00:05:57.714 EAL: Heap on socket 0 was shrunk by 1026MB
00:05:59.622 passed
00:05:59.622 
00:05:59.622 Run Summary: Type Total Ran Passed Failed Inactive
00:05:59.622 suites 1 1 n/a 0 0
00:05:59.622 tests 2 2 2 0 0
00:05:59.622 asserts 5775 5775 5775 0 n/a
00:05:59.622 
00:05:59.622 Elapsed time = 8.161 seconds
00:05:59.622 EAL: Calling mem event callback 'spdk:(nil)'
00:05:59.622 EAL: request: mp_malloc_sync
00:05:59.622 EAL: No shared files mode enabled, IPC is disabled
00:05:59.622 EAL: Heap on socket 0 was shrunk by 2MB
00:05:59.622 EAL: No shared files mode enabled, IPC is disabled
00:05:59.622 EAL: No shared files mode enabled, IPC is disabled
00:05:59.622 EAL: No shared files mode enabled, IPC is disabled
00:05:59.622 
00:05:59.622 real 0m8.491s
00:05:59.622 user 0m7.543s
00:05:59.622 sys 0m0.791s
00:05:59.622 ************************************
00:05:59.622 END TEST env_vtophys
00:05:59.622 ************************************
00:05:59.622 03:56:52 env.env_vtophys -- common/autotest_common.sh@1130 -- # xtrace_disable
00:05:59.622 03:56:52 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x
00:05:59.622 03:56:52 env -- env/env.sh@12 -- # run_test env_pci /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut
00:05:59.622 03:56:52 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:05:59.622 03:56:52 env -- common/autotest_common.sh@1111 -- # xtrace_disable
00:05:59.622 03:56:52 env -- common/autotest_common.sh@10 -- # set +x
00:05:59.622 ************************************
00:05:59.622 START TEST env_pci
00:05:59.622 ************************************
00:05:59.622 03:56:52 env.env_pci -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut
00:05:59.622 
00:05:59.622 
00:05:59.622 CUnit - A unit testing framework for C - Version 2.1-3
00:05:59.622 http://cunit.sourceforge.net/
00:05:59.622 
00:05:59.622 
00:05:59.622 Suite: pci
00:05:59.622 Test: pci_hook ...[2024-12-06 03:56:52.801491] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/pci.c:1117:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 56879 has claimed it
00:05:59.623 EAL: Cannot find device (10000:00:01.0)
00:05:59.623 passed
00:05:59.623 
00:05:59.623 Run Summary: Type Total Ran Passed Failed Inactive
00:05:59.623 suites 1 1 n/a 0 0
00:05:59.623 tests 1 1 1 0 0
00:05:59.623 asserts 25 25 25 0 n/a
00:05:59.623 
00:05:59.623 Elapsed time = 0.009 seconds
00:05:59.623 EAL: Failed to attach device on primary process
00:05:59.623 
00:05:59.623 real 0m0.106s
00:05:59.623 user 0m0.049s
00:05:59.623 sys 0m0.056s
00:05:59.623 ************************************
00:05:59.623 END TEST env_pci
00:05:59.623 ************************************
00:05:59.623 03:56:52 env.env_pci -- common/autotest_common.sh@1130 -- # xtrace_disable
00:05:59.623 03:56:52 env.env_pci -- common/autotest_common.sh@10 -- # set +x
00:05:59.623 03:56:52 env -- env/env.sh@14 -- # argv='-c 0x1 '
00:05:59.623 03:56:52 env -- env/env.sh@15 -- # uname
00:05:59.623 03:56:52 env -- env/env.sh@15 -- # '[' Linux = Linux ']'
00:05:59.623 03:56:52 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000
00:05:59.623 03:56:52 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000
00:05:59.623 03:56:52 env -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']'
00:05:59.623 03:56:52 env -- common/autotest_common.sh@1111 -- # xtrace_disable
00:05:59.623 03:56:52 env -- common/autotest_common.sh@10 -- # set +x
00:05:59.623 ************************************
00:05:59.623 START TEST env_dpdk_post_init
00:05:59.623 ************************************
00:05:59.623 03:56:52 env.env_dpdk_post_init -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000
00:05:59.881 EAL: Detected CPU lcores: 10
00:05:59.881 EAL: Detected NUMA nodes: 1
00:05:59.881 EAL: Detected shared linkage of DPDK
00:05:59.881 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket
00:05:59.881 EAL: Selected IOVA mode 'PA'
00:05:59.881 TELEMETRY: No legacy callbacks, legacy socket not created
00:05:59.881 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:10.0 (socket -1)
00:05:59.881 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:11.0 (socket -1)
00:05:59.881 Starting DPDK initialization...
00:05:59.881 Starting SPDK post initialization...
00:05:59.881 SPDK NVMe probe
00:05:59.881 Attaching to 0000:00:10.0
00:05:59.881 Attaching to 0000:00:11.0
00:05:59.881 Attached to 0000:00:10.0
00:05:59.881 Attached to 0000:00:11.0
00:05:59.881 Cleaning up...
00:05:59.881 
00:05:59.881 real 0m0.274s
00:05:59.881 user 0m0.093s
00:05:59.881 sys 0m0.082s
00:05:59.881 03:56:53 env.env_dpdk_post_init -- common/autotest_common.sh@1130 -- # xtrace_disable
00:05:59.881 03:56:53 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x
00:05:59.881 ************************************
00:05:59.881 END TEST env_dpdk_post_init
00:05:59.881 ************************************
00:06:00.140 03:56:53 env -- env/env.sh@26 -- # uname
00:06:00.140 03:56:53 env -- env/env.sh@26 -- # '[' Linux = Linux ']'
00:06:00.140 03:56:53 env -- env/env.sh@29 -- # run_test env_mem_callbacks /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks
00:06:00.140 03:56:53 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:06:00.140 03:56:53 env -- common/autotest_common.sh@1111 -- # xtrace_disable
00:06:00.140 03:56:53 env -- common/autotest_common.sh@10 -- # set +x
00:06:00.140 ************************************
00:06:00.140 START TEST env_mem_callbacks
00:06:00.140 ************************************
00:06:00.140 03:56:53 env.env_mem_callbacks -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks
00:06:00.140 EAL: Detected CPU lcores: 10
00:06:00.140 EAL: Detected NUMA nodes: 1
00:06:00.140 EAL: Detected shared linkage of DPDK
00:06:00.140 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket
00:06:00.140 EAL: Selected IOVA mode 'PA'
00:06:00.140 
00:06:00.140 
00:06:00.140 CUnit - A unit testing framework for C - Version 2.1-3
00:06:00.140 http://cunit.sourceforge.net/
00:06:00.140 
00:06:00.140 TELEMETRY: No legacy callbacks, legacy socket not created
00:06:00.140 
00:06:00.140 Suite: memory
00:06:00.140 Test: test ...
00:06:00.140 register 0x200000200000 2097152
00:06:00.140 malloc 3145728
00:06:00.140 register 0x200000400000 4194304
00:06:00.140 buf 0x2000004fffc0 len 3145728 PASSED
00:06:00.140 malloc 64
00:06:00.140 buf 0x2000004ffec0 len 64 PASSED
00:06:00.140 malloc 4194304
00:06:00.140 register 0x200000800000 6291456
00:06:00.140 buf 0x2000009fffc0 len 4194304 PASSED
00:06:00.140 free 0x2000004fffc0 3145728
00:06:00.400 free 0x2000004ffec0 64
00:06:00.400 unregister 0x200000400000 4194304 PASSED
00:06:00.400 free 0x2000009fffc0 4194304
00:06:00.400 unregister 0x200000800000 6291456 PASSED
00:06:00.400 malloc 8388608
00:06:00.400 register 0x200000400000 10485760
00:06:00.400 buf 0x2000005fffc0 len 8388608 PASSED
00:06:00.400 free 0x2000005fffc0 8388608
00:06:00.400 unregister 0x200000400000 10485760 PASSED
00:06:00.400 passed
00:06:00.400 
00:06:00.400 Run Summary: Type Total Ran Passed Failed Inactive
00:06:00.400 suites 1 1 n/a 0 0
00:06:00.400 tests 1 1 1 0 0
00:06:00.400 asserts 15 15 15 0 n/a
00:06:00.400 
00:06:00.400 Elapsed time = 0.084 seconds
00:06:00.400 
00:06:00.400 real 0m0.284s
00:06:00.400 user 0m0.110s
00:06:00.400 sys 0m0.071s
00:06:00.400 03:56:53 env.env_mem_callbacks -- common/autotest_common.sh@1130 -- # xtrace_disable
00:06:00.400 03:56:53 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x
00:06:00.400 ************************************
00:06:00.400 END TEST env_mem_callbacks
00:06:00.400 ************************************
00:06:00.400 
00:06:00.400 real 0m10.019s
00:06:00.400 user 0m8.282s
00:06:00.400 sys 0m1.375s
00:06:00.400 03:56:53 env -- common/autotest_common.sh@1130 -- # xtrace_disable
00:06:00.400 03:56:53 env -- common/autotest_common.sh@10 -- # set +x
00:06:00.400 ************************************
00:06:00.400 END TEST env
00:06:00.400 ************************************
00:06:00.400 03:56:53 -- spdk/autotest.sh@156 -- # run_test rpc /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh
00:06:00.400 03:56:53 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:06:00.400 03:56:53 -- common/autotest_common.sh@1111 -- # xtrace_disable
00:06:00.400 03:56:53 -- common/autotest_common.sh@10 -- # set +x
00:06:00.400 ************************************
00:06:00.400 START TEST rpc
00:06:00.400 ************************************
00:06:00.400 03:56:53 rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh
00:06:00.659 * Looking for test storage...
00:06:00.659 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc
00:06:00.659 03:56:53 rpc -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:06:00.659 03:56:53 rpc -- common/autotest_common.sh@1711 -- # lcov --version
00:06:00.659 03:56:53 rpc -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:06:00.659 03:56:53 rpc -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:06:00.659 03:56:53 rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:06:00.659 03:56:53 rpc -- scripts/common.sh@333 -- # local ver1 ver1_l
00:06:00.659 03:56:53 rpc -- scripts/common.sh@334 -- # local ver2 ver2_l
00:06:00.659 03:56:53 rpc -- scripts/common.sh@336 -- # IFS=.-:
00:06:00.659 03:56:53 rpc -- scripts/common.sh@336 -- # read -ra ver1
00:06:00.659 03:56:53 rpc -- scripts/common.sh@337 -- # IFS=.-:
00:06:00.659 03:56:53 rpc -- scripts/common.sh@337 -- # read -ra ver2
00:06:00.659 03:56:53 rpc -- scripts/common.sh@338 -- # local 'op=<'
00:06:00.659 03:56:53 rpc -- scripts/common.sh@340 -- # ver1_l=2
00:06:00.659 03:56:53 rpc -- scripts/common.sh@341 -- # ver2_l=1
00:06:00.659 03:56:53 rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:06:00.659 03:56:53 rpc -- scripts/common.sh@344 -- # case "$op" in
00:06:00.659 03:56:53 rpc -- scripts/common.sh@345 -- # : 1
00:06:00.659 03:56:53 rpc -- scripts/common.sh@364 -- # (( v = 0 ))
00:06:00.659 03:56:53 rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:06:00.659 03:56:53 rpc -- scripts/common.sh@365 -- # decimal 1
00:06:00.659 03:56:53 rpc -- scripts/common.sh@353 -- # local d=1
00:06:00.659 03:56:53 rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:06:00.659 03:56:53 rpc -- scripts/common.sh@355 -- # echo 1
00:06:00.659 03:56:53 rpc -- scripts/common.sh@365 -- # ver1[v]=1
00:06:00.659 03:56:53 rpc -- scripts/common.sh@366 -- # decimal 2
00:06:00.659 03:56:53 rpc -- scripts/common.sh@353 -- # local d=2
00:06:00.659 03:56:53 rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:06:00.659 03:56:53 rpc -- scripts/common.sh@355 -- # echo 2
00:06:00.659 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:06:00.659 03:56:53 rpc -- scripts/common.sh@366 -- # ver2[v]=2
00:06:00.659 03:56:53 rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:06:00.659 03:56:53 rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:06:00.659 03:56:53 rpc -- scripts/common.sh@368 -- # return 0
00:06:00.659 03:56:53 rpc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:06:00.659 03:56:53 rpc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
00:06:00.659 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:06:00.659 --rc genhtml_branch_coverage=1
00:06:00.659 --rc genhtml_function_coverage=1
00:06:00.659 --rc genhtml_legend=1
00:06:00.659 --rc geninfo_all_blocks=1
00:06:00.659 --rc geninfo_unexecuted_blocks=1
00:06:00.659 
00:06:00.659 '
00:06:00.659 03:56:53 rpc -- common/autotest_common.sh@1724 -- # LCOV_OPTS='
00:06:00.659 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:06:00.659 --rc genhtml_branch_coverage=1
00:06:00.659 --rc genhtml_function_coverage=1
00:06:00.659 --rc genhtml_legend=1
00:06:00.659 --rc geninfo_all_blocks=1
00:06:00.659 --rc geninfo_unexecuted_blocks=1
00:06:00.659 
00:06:00.659 '
00:06:00.660 03:56:53 rpc -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov
00:06:00.660 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:06:00.660 --rc genhtml_branch_coverage=1
00:06:00.660 --rc genhtml_function_coverage=1
00:06:00.660 --rc genhtml_legend=1
00:06:00.660 --rc geninfo_all_blocks=1
00:06:00.660 --rc geninfo_unexecuted_blocks=1
00:06:00.660 
00:06:00.660 '
00:06:00.660 03:56:53 rpc -- common/autotest_common.sh@1725 -- # LCOV='lcov
00:06:00.660 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:06:00.660 --rc genhtml_branch_coverage=1
00:06:00.660 --rc genhtml_function_coverage=1
00:06:00.660 --rc genhtml_legend=1
00:06:00.660 --rc geninfo_all_blocks=1
00:06:00.660 --rc geninfo_unexecuted_blocks=1
00:06:00.660 
00:06:00.660 '
00:06:00.660 03:56:53 rpc -- rpc/rpc.sh@65 -- # spdk_pid=57011
00:06:00.660 03:56:53 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT
00:06:00.660 03:56:53 rpc -- rpc/rpc.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -e bdev
00:06:00.660 03:56:53 rpc -- rpc/rpc.sh@67 -- # waitforlisten 57011
00:06:00.660 03:56:53 rpc -- common/autotest_common.sh@835 -- # '[' -z 57011 ']'
00:06:00.660 03:56:53 rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:06:00.660 03:56:53 rpc -- common/autotest_common.sh@840 -- # local max_retries=100
00:06:00.660 03:56:53 rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:06:00.660 03:56:53 rpc -- common/autotest_common.sh@844 -- # xtrace_disable
00:06:00.660 03:56:53 rpc -- common/autotest_common.sh@10 -- # set +x
00:06:00.660 [2024-12-06 03:56:53.992991] Starting SPDK v25.01-pre git sha1 a4d2a837b / DPDK 24.03.0 initialization...
00:06:00.660 [2024-12-06 03:56:53.993206] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57011 ]
00:06:00.919 [2024-12-06 03:56:54.171273] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:06:01.178 [2024-12-06 03:56:54.296308] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified.
00:06:01.178 [2024-12-06 03:56:54.296460] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 57011' to capture a snapshot of events at runtime.
00:06:01.178 [2024-12-06 03:56:54.296502] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:06:01.178 [2024-12-06 03:56:54.296536] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:06:01.178 [2024-12-06 03:56:54.296574] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid57011 for offline analysis/debug.
00:06:01.178 [2024-12-06 03:56:54.297861] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:06:02.134 03:56:55 rpc -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:06:02.134 03:56:55 rpc -- common/autotest_common.sh@868 -- # return 0
00:06:02.134 03:56:55 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc
00:06:02.134 03:56:55 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc
00:06:02.134 03:56:55 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd
00:06:02.134 03:56:55 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity
00:06:02.134 03:56:55 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:06:02.134 03:56:55 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable
00:06:02.134 03:56:55 rpc -- common/autotest_common.sh@10 -- # set +x
00:06:02.134 ************************************
00:06:02.134 START TEST rpc_integrity
00:06:02.134 ************************************
00:06:02.134 03:56:55 rpc.rpc_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity
00:06:02.134 03:56:55 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs
00:06:02.134 03:56:55 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable
00:06:02.134 03:56:55 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x
00:06:02.134 03:56:55 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:06:02.134 03:56:55 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]'
00:06:02.134 03:56:55 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length
00:06:02.134 03:56:55 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']'
00:06:02.134 03:56:55 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512
00:06:02.134 03:56:55 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable
00:06:02.134 03:56:55 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x
00:06:02.134 03:56:55 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:06:02.134 03:56:55 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0
00:06:02.134 03:56:55 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs
00:06:02.134 03:56:55 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable
00:06:02.134 03:56:55 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x
00:06:02.134 03:56:55 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:06:02.134 03:56:55 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[
00:06:02.134 {
00:06:02.134 "name": "Malloc0",
00:06:02.134 "aliases": [
00:06:02.134 "7fb4910e-d120-492f-ac2b-9ba69c57694c"
00:06:02.134 ],
00:06:02.134 "product_name": "Malloc disk",
00:06:02.134 "block_size": 512,
00:06:02.134 "num_blocks": 16384,
00:06:02.134 "uuid": "7fb4910e-d120-492f-ac2b-9ba69c57694c",
00:06:02.134 "assigned_rate_limits": {
00:06:02.134 "rw_ios_per_sec": 0,
00:06:02.134 "rw_mbytes_per_sec": 0,
00:06:02.134 "r_mbytes_per_sec": 0,
00:06:02.134 "w_mbytes_per_sec": 0
00:06:02.134 },
00:06:02.134 "claimed": false,
00:06:02.134 "zoned": false,
00:06:02.134 "supported_io_types": {
00:06:02.134 "read": true,
00:06:02.134 "write": true,
00:06:02.134 "unmap": true,
00:06:02.134 "flush": true,
00:06:02.134 "reset": true,
00:06:02.134 "nvme_admin": false,
00:06:02.134 "nvme_io": false,
00:06:02.134 "nvme_io_md": false,
00:06:02.134 "write_zeroes": true,
00:06:02.134 "zcopy": true,
00:06:02.134 "get_zone_info": false,
00:06:02.134 "zone_management": false,
00:06:02.134 "zone_append": false,
00:06:02.134 "compare": false,
00:06:02.134 "compare_and_write": false,
00:06:02.134 "abort": true,
00:06:02.134 "seek_hole": false,
00:06:02.134 "seek_data": false,
00:06:02.134 "copy": true,
00:06:02.134 "nvme_iov_md": false
00:06:02.134 },
00:06:02.134 "memory_domains": [
00:06:02.134 {
00:06:02.134 "dma_device_id": "system",
00:06:02.134 "dma_device_type": 1
00:06:02.134 },
00:06:02.134 {
00:06:02.134 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:06:02.134 "dma_device_type": 2
00:06:02.134 }
00:06:02.134 ],
00:06:02.134 "driver_specific": {}
00:06:02.134 }
00:06:02.134 ]'
00:06:02.134 03:56:55 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length
00:06:02.134 03:56:55 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']'
00:06:02.134 03:56:55 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0
00:06:02.134 03:56:55 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable
00:06:02.134 03:56:55 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x
00:06:02.134 [2024-12-06 03:56:55.418788] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on Malloc0
00:06:02.134 [2024-12-06 03:56:55.418861] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:06:02.134 [2024-12-06 03:56:55.418886] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007880
00:06:02.134 [2024-12-06 03:56:55.418901] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed
00:06:02.134 [2024-12-06 03:56:55.421352] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:06:02.134 [2024-12-06 03:56:55.421445] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0
00:06:02.134 Passthru0
00:06:02.134 03:56:55 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:06:02.134 03:56:55 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs
00:06:02.134 03:56:55 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable
00:06:02.134 03:56:55 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x
00:06:02.134 03:56:55 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:06:02.134 03:56:55 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[
00:06:02.134 {
00:06:02.134 "name": "Malloc0",
00:06:02.134 "aliases": [
00:06:02.134 "7fb4910e-d120-492f-ac2b-9ba69c57694c"
00:06:02.134 ],
00:06:02.134 "product_name": "Malloc disk",
00:06:02.134 "block_size": 512,
00:06:02.134 "num_blocks": 16384,
00:06:02.134 "uuid": "7fb4910e-d120-492f-ac2b-9ba69c57694c",
00:06:02.134 "assigned_rate_limits": {
00:06:02.134 "rw_ios_per_sec": 0,
00:06:02.134 "rw_mbytes_per_sec": 0,
00:06:02.134 "r_mbytes_per_sec": 0,
00:06:02.134 "w_mbytes_per_sec": 0
00:06:02.134 },
00:06:02.134 "claimed": true,
00:06:02.134 "claim_type": "exclusive_write",
00:06:02.134 "zoned": false,
00:06:02.134 "supported_io_types": {
00:06:02.134 "read": true,
00:06:02.134 "write": true,
00:06:02.134 "unmap": true,
00:06:02.134 "flush": true,
00:06:02.134 "reset": true,
00:06:02.134 "nvme_admin": false,
00:06:02.134 "nvme_io": false,
00:06:02.134 "nvme_io_md": false,
00:06:02.134 "write_zeroes": true,
00:06:02.134 "zcopy": true,
00:06:02.134 "get_zone_info": false,
00:06:02.134 "zone_management": false,
00:06:02.134 "zone_append": false,
00:06:02.134 "compare": false,
00:06:02.134 "compare_and_write": false,
00:06:02.134 "abort": true,
00:06:02.134 "seek_hole": false,
00:06:02.134 "seek_data": false,
00:06:02.134 "copy": true,
00:06:02.134 "nvme_iov_md": false
00:06:02.134 },
00:06:02.134 "memory_domains": [
00:06:02.134 {
00:06:02.134 "dma_device_id": "system",
00:06:02.134 "dma_device_type": 1
00:06:02.134 },
00:06:02.134 {
00:06:02.134 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:06:02.134 "dma_device_type": 2
00:06:02.134 }
00:06:02.134 ],
00:06:02.134 "driver_specific": {}
00:06:02.134 },
00:06:02.134 {
00:06:02.134 "name": "Passthru0",
00:06:02.134 "aliases": [
00:06:02.134 "5e268dd6-1ef4-52e0-829e-a39d1853e2aa"
00:06:02.134 ],
00:06:02.134 "product_name": "passthru",
00:06:02.134 "block_size": 512,
00:06:02.135 "num_blocks": 16384,
00:06:02.135 "uuid": "5e268dd6-1ef4-52e0-829e-a39d1853e2aa",
00:06:02.135 "assigned_rate_limits": {
00:06:02.135 "rw_ios_per_sec": 0,
00:06:02.135 "rw_mbytes_per_sec": 0,
00:06:02.135 "r_mbytes_per_sec": 0,
00:06:02.135 "w_mbytes_per_sec": 0
00:06:02.135 },
00:06:02.135 "claimed": false,
00:06:02.135 "zoned": false,
00:06:02.135 "supported_io_types": {
00:06:02.135 "read": true,
00:06:02.135 "write": true,
00:06:02.135 "unmap": true,
00:06:02.135 "flush": true,
00:06:02.135 "reset": true,
00:06:02.135 "nvme_admin": false,
00:06:02.135 "nvme_io": false,
00:06:02.135 "nvme_io_md": false,
00:06:02.135 "write_zeroes": true,
00:06:02.135 "zcopy": true,
00:06:02.135 "get_zone_info": false,
00:06:02.135 "zone_management": false,
00:06:02.135 "zone_append": false,
00:06:02.135 "compare": false,
00:06:02.135 "compare_and_write": false,
00:06:02.135 "abort": true,
00:06:02.135 "seek_hole": false,
00:06:02.135 "seek_data": false,
00:06:02.135 "copy": true,
00:06:02.135 "nvme_iov_md": false
00:06:02.135 },
00:06:02.135 "memory_domains": [
00:06:02.135 {
00:06:02.135 "dma_device_id": "system",
00:06:02.135 "dma_device_type": 1
00:06:02.135 },
00:06:02.135 {
00:06:02.135 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:06:02.135 "dma_device_type": 2
00:06:02.135 }
00:06:02.135 ],
00:06:02.135 "driver_specific": {
00:06:02.135 "passthru": {
00:06:02.135 "name": "Passthru0",
00:06:02.135 "base_bdev_name": "Malloc0"
00:06:02.135 }
00:06:02.135 }
00:06:02.135 }
00:06:02.135 ]'
00:06:02.392 03:56:55 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length
00:06:02.392 03:56:55 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']'
00:06:02.392 03:56:55 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0
00:06:02.392 03:56:55 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable
00:06:02.392 03:56:55 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x
00:06:02.392 03:56:55 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:06:02.392 03:56:55 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0
00:06:02.392 03:56:55 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable
00:06:02.392 03:56:55 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x
00:06:02.392 03:56:55 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:06:02.392 03:56:55 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs
00:06:02.392 03:56:55 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable
00:06:02.392 03:56:55 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x
00:06:02.392 03:56:55 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:06:02.392 03:56:55 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]'
00:06:02.392 03:56:55 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length
00:06:02.392 ************************************
00:06:02.392 END TEST rpc_integrity
00:06:02.392 ************************************
00:06:02.392 03:56:55 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']'
00:06:02.392 
00:06:02.392 real 0m0.371s
00:06:02.392 user 0m0.200s
00:06:02.392 sys 0m0.061s
00:06:02.392 03:56:55 rpc.rpc_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable
00:06:02.392 03:56:55 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x
00:06:02.392 03:56:55 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins
00:06:02.392 03:56:55 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:06:02.392 03:56:55 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable
00:06:02.392 03:56:55 rpc -- common/autotest_common.sh@10 -- # set +x
00:06:02.392 ************************************
00:06:02.392 START TEST rpc_plugins
00:06:02.392 ************************************
00:06:02.392 03:56:55 rpc.rpc_plugins -- common/autotest_common.sh@1129 -- # rpc_plugins
00:06:02.392 03:56:55 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc
00:06:02.392 03:56:55 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable
00:06:02.392 03:56:55 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x
00:06:02.392 03:56:55 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:06:02.392 03:56:55 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1
00:06:02.392 03:56:55 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs
00:06:02.392 03:56:55 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable
00:06:02.392 03:56:55 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x
00:06:02.392 03:56:55 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:06:02.392 03:56:55 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[
00:06:02.393 {
00:06:02.393 "name": "Malloc1",
00:06:02.393 "aliases": [
00:06:02.393 "d7d6ada8-cef4-4352-b312-d19952c82523"
00:06:02.393 ],
00:06:02.393 "product_name": "Malloc disk",
00:06:02.393 "block_size": 4096,
00:06:02.393 "num_blocks": 256,
00:06:02.393 "uuid": "d7d6ada8-cef4-4352-b312-d19952c82523",
00:06:02.393 "assigned_rate_limits": {
00:06:02.393 "rw_ios_per_sec": 0,
00:06:02.393 "rw_mbytes_per_sec": 0,
00:06:02.393 "r_mbytes_per_sec": 0,
00:06:02.393 "w_mbytes_per_sec": 0
00:06:02.393 },
00:06:02.393 "claimed": false,
00:06:02.393 "zoned": false,
00:06:02.393 "supported_io_types": {
00:06:02.393 "read": true,
00:06:02.393 "write": true,
00:06:02.393 "unmap": true,
00:06:02.393 "flush": true,
00:06:02.393 "reset": true,
00:06:02.393 "nvme_admin": false,
00:06:02.393 "nvme_io": false,
00:06:02.393 "nvme_io_md": false,
00:06:02.393 "write_zeroes": true,
00:06:02.393 "zcopy": true,
00:06:02.393 "get_zone_info": false,
00:06:02.393 "zone_management": false,
00:06:02.393 "zone_append": false,
00:06:02.393 "compare": false,
00:06:02.393 "compare_and_write": false,
00:06:02.393 "abort": true,
00:06:02.393 "seek_hole": false,
00:06:02.393 "seek_data": false,
00:06:02.393 "copy": true,
00:06:02.393 "nvme_iov_md": false
00:06:02.393 },
00:06:02.393 "memory_domains": [
00:06:02.393 {
00:06:02.393 "dma_device_id": "system",
00:06:02.393 "dma_device_type": 1
00:06:02.393 },
00:06:02.393 {
00:06:02.393 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:06:02.393 "dma_device_type": 2
00:06:02.393 }
00:06:02.393 ],
00:06:02.393 "driver_specific": {}
00:06:02.393 }
00:06:02.393 ]'
00:06:02.652 03:56:55 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length
00:06:02.652 03:56:55 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']'
00:06:02.652 03:56:55 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1
00:06:02.652 03:56:55 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable
00:06:02.652 03:56:55 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x
00:06:02.652 03:56:55 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:06:02.652 03:56:55 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs
00:06:02.652 03:56:55 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable
00:06:02.652 03:56:55 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x
00:06:02.652 03:56:55 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:06:02.652 03:56:55 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]'
00:06:02.652 03:56:55 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length
00:06:02.652 ************************************
00:06:02.652 END TEST rpc_plugins
00:06:02.652 ************************************
00:06:02.652 03:56:55 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']'
00:06:02.652 
00:06:02.652 real 0m0.175s
00:06:02.652 user 0m0.092s
00:06:02.652 sys 0m0.029s
00:06:02.652 03:56:55 rpc.rpc_plugins -- common/autotest_common.sh@1130 -- # xtrace_disable
00:06:02.652 03:56:55 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x
00:06:02.652 03:56:55 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test
00:06:02.652 03:56:55 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:06:02.652 03:56:55 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable
00:06:02.652 03:56:55 rpc -- common/autotest_common.sh@10 -- # set +x
00:06:02.652 ************************************
00:06:02.652 START TEST rpc_trace_cmd_test
00:06:02.652 ************************************
00:06:02.652 03:56:55 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1129 -- # rpc_trace_cmd_test
00:06:02.652 03:56:55 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info
00:06:02.652 03:56:55 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info
00:06:02.652 03:56:55 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:06:02.652 03:56:55 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x
00:06:02.652 03:56:55 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:06:02.652 03:56:55 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{
00:06:02.652 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid57011",
00:06:02.652 "tpoint_group_mask": "0x8",
00:06:02.652 "iscsi_conn": {
00:06:02.652 "mask": "0x2",
00:06:02.652 "tpoint_mask": "0x0"
00:06:02.652 },
00:06:02.652 "scsi": {
00:06:02.652 "mask": "0x4",
00:06:02.652 "tpoint_mask": "0x0"
00:06:02.652 },
00:06:02.652 "bdev": {
00:06:02.652 "mask": "0x8",
00:06:02.652 "tpoint_mask": "0xffffffffffffffff"
00:06:02.652 },
00:06:02.652 "nvmf_rdma": {
00:06:02.652 "mask": "0x10",
00:06:02.652 "tpoint_mask": "0x0"
00:06:02.652 },
00:06:02.652 "nvmf_tcp": {
00:06:02.652 "mask": "0x20",
00:06:02.652 "tpoint_mask": "0x0"
00:06:02.652 },
00:06:02.652 "ftl": {
00:06:02.652 "mask": "0x40",
00:06:02.652 "tpoint_mask": "0x0"
00:06:02.652 },
00:06:02.652 "blobfs": {
00:06:02.652 "mask": "0x80",
00:06:02.652 "tpoint_mask": "0x0"
00:06:02.652 },
00:06:02.652 "dsa": {
00:06:02.652 "mask": "0x200",
00:06:02.652 "tpoint_mask": "0x0"
00:06:02.652 },
00:06:02.652 "thread": {
00:06:02.652 "mask": "0x400",
00:06:02.652 "tpoint_mask": "0x0"
00:06:02.652 },
00:06:02.652 "nvme_pcie": {
00:06:02.652 "mask": "0x800",
00:06:02.652 "tpoint_mask": "0x0"
00:06:02.652 },
00:06:02.652 "iaa": {
00:06:02.652 "mask": "0x1000",
00:06:02.652 "tpoint_mask": "0x0"
00:06:02.652 },
00:06:02.652 "nvme_tcp": {
00:06:02.652 "mask": "0x2000",
00:06:02.652 "tpoint_mask": "0x0"
00:06:02.652 },
00:06:02.652 "bdev_nvme": {
00:06:02.652 "mask": "0x4000",
00:06:02.652 "tpoint_mask": "0x0"
00:06:02.652 },
00:06:02.652 "sock": {
00:06:02.652 "mask": "0x8000",
00:06:02.652 "tpoint_mask": "0x0"
00:06:02.652 },
00:06:02.652 "blob": {
00:06:02.652 "mask": "0x10000",
00:06:02.652 "tpoint_mask": "0x0"
00:06:02.652 },
00:06:02.652 "bdev_raid": {
00:06:02.652 "mask": "0x20000",
00:06:02.652 "tpoint_mask": "0x0"
00:06:02.652 },
00:06:02.652 "scheduler": {
00:06:02.652 "mask": "0x40000",
00:06:02.652 "tpoint_mask": "0x0"
00:06:02.652 }
00:06:02.652 }'
00:06:02.652 03:56:55 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length
00:06:02.652 03:56:55 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 19 -gt 2 ']'
00:06:02.652 03:56:55 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")'
00:06:02.911 03:56:56 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']'
00:06:02.911 03:56:56 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")'
00:06:02.911 03:56:56 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']'
00:06:02.911 03:56:56 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")'
00:06:02.911 03:56:56 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']'
00:06:02.911 03:56:56 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask
00:06:02.911 ************************************
00:06:02.911 END TEST rpc_trace_cmd_test
00:06:02.911 ************************************
00:06:02.911 03:56:56 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']'
00:06:02.911 
00:06:02.911 real 0m0.190s
00:06:02.911 user
0m0.148s 00:06:02.911 sys 0m0.033s 00:06:02.911 03:56:56 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:02.911 03:56:56 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:06:02.911 03:56:56 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:06:02.911 03:56:56 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:06:02.911 03:56:56 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:06:02.911 03:56:56 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:02.911 03:56:56 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:02.911 03:56:56 rpc -- common/autotest_common.sh@10 -- # set +x 00:06:02.911 ************************************ 00:06:02.911 START TEST rpc_daemon_integrity 00:06:02.911 ************************************ 00:06:02.911 03:56:56 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:06:02.911 03:56:56 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:06:02.911 03:56:56 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:02.911 03:56:56 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:02.911 03:56:56 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:02.911 03:56:56 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:06:02.911 03:56:56 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:06:02.911 03:56:56 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:06:02.911 03:56:56 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:06:02.911 03:56:56 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:02.911 03:56:56 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:02.911 03:56:56 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:02.911 03:56:56 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # 
malloc=Malloc2 00:06:02.911 03:56:56 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:06:02.911 03:56:56 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:02.911 03:56:56 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:03.170 03:56:56 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:03.170 03:56:56 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:06:03.170 { 00:06:03.170 "name": "Malloc2", 00:06:03.170 "aliases": [ 00:06:03.170 "17f5cfc0-e314-425b-a15f-eb01fe8be119" 00:06:03.170 ], 00:06:03.170 "product_name": "Malloc disk", 00:06:03.170 "block_size": 512, 00:06:03.170 "num_blocks": 16384, 00:06:03.170 "uuid": "17f5cfc0-e314-425b-a15f-eb01fe8be119", 00:06:03.170 "assigned_rate_limits": { 00:06:03.170 "rw_ios_per_sec": 0, 00:06:03.170 "rw_mbytes_per_sec": 0, 00:06:03.170 "r_mbytes_per_sec": 0, 00:06:03.170 "w_mbytes_per_sec": 0 00:06:03.170 }, 00:06:03.170 "claimed": false, 00:06:03.170 "zoned": false, 00:06:03.170 "supported_io_types": { 00:06:03.170 "read": true, 00:06:03.170 "write": true, 00:06:03.170 "unmap": true, 00:06:03.170 "flush": true, 00:06:03.170 "reset": true, 00:06:03.170 "nvme_admin": false, 00:06:03.170 "nvme_io": false, 00:06:03.170 "nvme_io_md": false, 00:06:03.170 "write_zeroes": true, 00:06:03.170 "zcopy": true, 00:06:03.170 "get_zone_info": false, 00:06:03.170 "zone_management": false, 00:06:03.170 "zone_append": false, 00:06:03.170 "compare": false, 00:06:03.170 "compare_and_write": false, 00:06:03.170 "abort": true, 00:06:03.170 "seek_hole": false, 00:06:03.170 "seek_data": false, 00:06:03.170 "copy": true, 00:06:03.170 "nvme_iov_md": false 00:06:03.170 }, 00:06:03.170 "memory_domains": [ 00:06:03.170 { 00:06:03.170 "dma_device_id": "system", 00:06:03.170 "dma_device_type": 1 00:06:03.170 }, 00:06:03.170 { 00:06:03.170 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:03.170 "dma_device_type": 2 00:06:03.170 } 
00:06:03.170 ], 00:06:03.170 "driver_specific": {} 00:06:03.170 } 00:06:03.170 ]' 00:06:03.170 03:56:56 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:06:03.170 03:56:56 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:06:03.170 03:56:56 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:06:03.170 03:56:56 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:03.170 03:56:56 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:03.170 [2024-12-06 03:56:56.334600] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:06:03.170 [2024-12-06 03:56:56.334726] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:06:03.170 [2024-12-06 03:56:56.334755] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:06:03.170 [2024-12-06 03:56:56.334767] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:06:03.170 [2024-12-06 03:56:56.337309] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:06:03.170 [2024-12-06 03:56:56.337349] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:06:03.170 Passthru0 00:06:03.170 03:56:56 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:03.170 03:56:56 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:06:03.170 03:56:56 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:03.170 03:56:56 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:03.170 03:56:56 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:03.170 03:56:56 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:06:03.170 { 00:06:03.170 "name": "Malloc2", 00:06:03.170 "aliases": [ 00:06:03.170 "17f5cfc0-e314-425b-a15f-eb01fe8be119" 
00:06:03.170 ], 00:06:03.170 "product_name": "Malloc disk", 00:06:03.170 "block_size": 512, 00:06:03.170 "num_blocks": 16384, 00:06:03.170 "uuid": "17f5cfc0-e314-425b-a15f-eb01fe8be119", 00:06:03.170 "assigned_rate_limits": { 00:06:03.170 "rw_ios_per_sec": 0, 00:06:03.170 "rw_mbytes_per_sec": 0, 00:06:03.170 "r_mbytes_per_sec": 0, 00:06:03.170 "w_mbytes_per_sec": 0 00:06:03.170 }, 00:06:03.170 "claimed": true, 00:06:03.170 "claim_type": "exclusive_write", 00:06:03.170 "zoned": false, 00:06:03.170 "supported_io_types": { 00:06:03.170 "read": true, 00:06:03.170 "write": true, 00:06:03.170 "unmap": true, 00:06:03.170 "flush": true, 00:06:03.170 "reset": true, 00:06:03.170 "nvme_admin": false, 00:06:03.170 "nvme_io": false, 00:06:03.170 "nvme_io_md": false, 00:06:03.170 "write_zeroes": true, 00:06:03.170 "zcopy": true, 00:06:03.170 "get_zone_info": false, 00:06:03.170 "zone_management": false, 00:06:03.170 "zone_append": false, 00:06:03.170 "compare": false, 00:06:03.170 "compare_and_write": false, 00:06:03.170 "abort": true, 00:06:03.170 "seek_hole": false, 00:06:03.170 "seek_data": false, 00:06:03.170 "copy": true, 00:06:03.170 "nvme_iov_md": false 00:06:03.170 }, 00:06:03.170 "memory_domains": [ 00:06:03.170 { 00:06:03.170 "dma_device_id": "system", 00:06:03.170 "dma_device_type": 1 00:06:03.170 }, 00:06:03.170 { 00:06:03.170 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:03.170 "dma_device_type": 2 00:06:03.170 } 00:06:03.170 ], 00:06:03.170 "driver_specific": {} 00:06:03.170 }, 00:06:03.170 { 00:06:03.170 "name": "Passthru0", 00:06:03.170 "aliases": [ 00:06:03.170 "74681823-767e-533e-9a2c-90b40964207a" 00:06:03.170 ], 00:06:03.170 "product_name": "passthru", 00:06:03.170 "block_size": 512, 00:06:03.170 "num_blocks": 16384, 00:06:03.170 "uuid": "74681823-767e-533e-9a2c-90b40964207a", 00:06:03.170 "assigned_rate_limits": { 00:06:03.170 "rw_ios_per_sec": 0, 00:06:03.170 "rw_mbytes_per_sec": 0, 00:06:03.170 "r_mbytes_per_sec": 0, 00:06:03.170 "w_mbytes_per_sec": 0 
00:06:03.170 }, 00:06:03.170 "claimed": false, 00:06:03.170 "zoned": false, 00:06:03.170 "supported_io_types": { 00:06:03.170 "read": true, 00:06:03.170 "write": true, 00:06:03.170 "unmap": true, 00:06:03.170 "flush": true, 00:06:03.170 "reset": true, 00:06:03.170 "nvme_admin": false, 00:06:03.170 "nvme_io": false, 00:06:03.170 "nvme_io_md": false, 00:06:03.170 "write_zeroes": true, 00:06:03.170 "zcopy": true, 00:06:03.170 "get_zone_info": false, 00:06:03.170 "zone_management": false, 00:06:03.170 "zone_append": false, 00:06:03.170 "compare": false, 00:06:03.170 "compare_and_write": false, 00:06:03.170 "abort": true, 00:06:03.170 "seek_hole": false, 00:06:03.170 "seek_data": false, 00:06:03.170 "copy": true, 00:06:03.170 "nvme_iov_md": false 00:06:03.170 }, 00:06:03.170 "memory_domains": [ 00:06:03.170 { 00:06:03.170 "dma_device_id": "system", 00:06:03.170 "dma_device_type": 1 00:06:03.170 }, 00:06:03.170 { 00:06:03.170 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:03.170 "dma_device_type": 2 00:06:03.170 } 00:06:03.170 ], 00:06:03.170 "driver_specific": { 00:06:03.170 "passthru": { 00:06:03.170 "name": "Passthru0", 00:06:03.170 "base_bdev_name": "Malloc2" 00:06:03.170 } 00:06:03.170 } 00:06:03.170 } 00:06:03.170 ]' 00:06:03.170 03:56:56 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:06:03.170 03:56:56 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:06:03.170 03:56:56 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:06:03.170 03:56:56 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:03.170 03:56:56 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:03.170 03:56:56 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:03.170 03:56:56 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:06:03.170 03:56:56 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:06:03.170 03:56:56 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:03.170 03:56:56 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:03.170 03:56:56 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:06:03.170 03:56:56 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:03.170 03:56:56 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:03.170 03:56:56 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:03.170 03:56:56 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:06:03.170 03:56:56 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:06:03.430 ************************************ 00:06:03.430 END TEST rpc_daemon_integrity 00:06:03.430 ************************************ 00:06:03.430 03:56:56 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:06:03.430 00:06:03.430 real 0m0.357s 00:06:03.430 user 0m0.199s 00:06:03.430 sys 0m0.055s 00:06:03.430 03:56:56 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:03.430 03:56:56 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:03.430 03:56:56 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:06:03.430 03:56:56 rpc -- rpc/rpc.sh@84 -- # killprocess 57011 00:06:03.430 03:56:56 rpc -- common/autotest_common.sh@954 -- # '[' -z 57011 ']' 00:06:03.430 03:56:56 rpc -- common/autotest_common.sh@958 -- # kill -0 57011 00:06:03.430 03:56:56 rpc -- common/autotest_common.sh@959 -- # uname 00:06:03.430 03:56:56 rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:03.430 03:56:56 rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57011 00:06:03.430 killing process with pid 57011 00:06:03.430 03:56:56 rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:03.430 03:56:56 rpc -- common/autotest_common.sh@964 -- 
# '[' reactor_0 = sudo ']' 00:06:03.430 03:56:56 rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57011' 00:06:03.430 03:56:56 rpc -- common/autotest_common.sh@973 -- # kill 57011 00:06:03.430 03:56:56 rpc -- common/autotest_common.sh@978 -- # wait 57011 00:06:05.981 ************************************ 00:06:05.981 END TEST rpc 00:06:05.981 ************************************ 00:06:05.981 00:06:05.981 real 0m5.404s 00:06:05.981 user 0m5.970s 00:06:05.981 sys 0m0.903s 00:06:05.981 03:56:59 rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:05.981 03:56:59 rpc -- common/autotest_common.sh@10 -- # set +x 00:06:05.981 03:56:59 -- spdk/autotest.sh@157 -- # run_test skip_rpc /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:06:05.981 03:56:59 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:05.981 03:56:59 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:05.981 03:56:59 -- common/autotest_common.sh@10 -- # set +x 00:06:05.981 ************************************ 00:06:05.981 START TEST skip_rpc 00:06:05.981 ************************************ 00:06:05.981 03:56:59 skip_rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:06:05.981 * Looking for test storage... 
00:06:05.981 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:06:05.981 03:56:59 skip_rpc -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:06:05.981 03:56:59 skip_rpc -- common/autotest_common.sh@1711 -- # lcov --version 00:06:05.981 03:56:59 skip_rpc -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:06:06.241 03:56:59 skip_rpc -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:06:06.241 03:56:59 skip_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:06.241 03:56:59 skip_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:06.241 03:56:59 skip_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:06.241 03:56:59 skip_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:06:06.241 03:56:59 skip_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:06:06.241 03:56:59 skip_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:06:06.241 03:56:59 skip_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:06:06.241 03:56:59 skip_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:06:06.241 03:56:59 skip_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:06:06.241 03:56:59 skip_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:06:06.241 03:56:59 skip_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:06.241 03:56:59 skip_rpc -- scripts/common.sh@344 -- # case "$op" in 00:06:06.241 03:56:59 skip_rpc -- scripts/common.sh@345 -- # : 1 00:06:06.241 03:56:59 skip_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:06.241 03:56:59 skip_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:06.241 03:56:59 skip_rpc -- scripts/common.sh@365 -- # decimal 1 00:06:06.241 03:56:59 skip_rpc -- scripts/common.sh@353 -- # local d=1 00:06:06.241 03:56:59 skip_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:06.241 03:56:59 skip_rpc -- scripts/common.sh@355 -- # echo 1 00:06:06.241 03:56:59 skip_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:06:06.241 03:56:59 skip_rpc -- scripts/common.sh@366 -- # decimal 2 00:06:06.241 03:56:59 skip_rpc -- scripts/common.sh@353 -- # local d=2 00:06:06.241 03:56:59 skip_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:06.241 03:56:59 skip_rpc -- scripts/common.sh@355 -- # echo 2 00:06:06.241 03:56:59 skip_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:06:06.241 03:56:59 skip_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:06.241 03:56:59 skip_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:06.241 03:56:59 skip_rpc -- scripts/common.sh@368 -- # return 0 00:06:06.241 03:56:59 skip_rpc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:06.241 03:56:59 skip_rpc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:06:06.241 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:06.241 --rc genhtml_branch_coverage=1 00:06:06.241 --rc genhtml_function_coverage=1 00:06:06.241 --rc genhtml_legend=1 00:06:06.241 --rc geninfo_all_blocks=1 00:06:06.241 --rc geninfo_unexecuted_blocks=1 00:06:06.241 00:06:06.241 ' 00:06:06.241 03:56:59 skip_rpc -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:06:06.241 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:06.241 --rc genhtml_branch_coverage=1 00:06:06.241 --rc genhtml_function_coverage=1 00:06:06.241 --rc genhtml_legend=1 00:06:06.241 --rc geninfo_all_blocks=1 00:06:06.241 --rc geninfo_unexecuted_blocks=1 00:06:06.241 00:06:06.241 ' 00:06:06.241 03:56:59 skip_rpc -- common/autotest_common.sh@1725 -- # export 
'LCOV=lcov 00:06:06.241 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:06.241 --rc genhtml_branch_coverage=1 00:06:06.241 --rc genhtml_function_coverage=1 00:06:06.241 --rc genhtml_legend=1 00:06:06.241 --rc geninfo_all_blocks=1 00:06:06.241 --rc geninfo_unexecuted_blocks=1 00:06:06.241 00:06:06.241 ' 00:06:06.241 03:56:59 skip_rpc -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:06:06.241 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:06.241 --rc genhtml_branch_coverage=1 00:06:06.241 --rc genhtml_function_coverage=1 00:06:06.241 --rc genhtml_legend=1 00:06:06.241 --rc geninfo_all_blocks=1 00:06:06.241 --rc geninfo_unexecuted_blocks=1 00:06:06.241 00:06:06.241 ' 00:06:06.241 03:56:59 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:06:06.241 03:56:59 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:06:06.241 03:56:59 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:06:06.241 03:56:59 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:06.241 03:56:59 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:06.241 03:56:59 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:06.241 ************************************ 00:06:06.241 START TEST skip_rpc 00:06:06.241 ************************************ 00:06:06.241 03:56:59 skip_rpc.skip_rpc -- common/autotest_common.sh@1129 -- # test_skip_rpc 00:06:06.241 03:56:59 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=57246 00:06:06.241 03:56:59 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:06:06.241 03:56:59 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:06:06.241 03:56:59 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:06:06.241 [2024-12-06 03:56:59.494534] Starting SPDK v25.01-pre 
git sha1 a4d2a837b / DPDK 24.03.0 initialization... 00:06:06.241 [2024-12-06 03:56:59.494754] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57246 ] 00:06:06.526 [2024-12-06 03:56:59.669845] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:06.526 [2024-12-06 03:56:59.785793] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:11.805 03:57:04 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:06:11.805 03:57:04 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # local es=0 00:06:11.805 03:57:04 skip_rpc.skip_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd spdk_get_version 00:06:11.805 03:57:04 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:06:11.805 03:57:04 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:11.805 03:57:04 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:06:11.805 03:57:04 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:11.805 03:57:04 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # rpc_cmd spdk_get_version 00:06:11.805 03:57:04 skip_rpc.skip_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:11.805 03:57:04 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:11.805 03:57:04 skip_rpc.skip_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:06:11.805 03:57:04 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # es=1 00:06:11.805 03:57:04 skip_rpc.skip_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:11.805 03:57:04 skip_rpc.skip_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:11.805 03:57:04 skip_rpc.skip_rpc -- common/autotest_common.sh@679 -- # (( 
!es == 0 )) 00:06:11.805 03:57:04 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:06:11.805 03:57:04 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 57246 00:06:11.805 03:57:04 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # '[' -z 57246 ']' 00:06:11.805 03:57:04 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # kill -0 57246 00:06:11.805 03:57:04 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # uname 00:06:11.805 03:57:04 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:11.805 03:57:04 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57246 00:06:11.805 03:57:04 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:11.805 03:57:04 skip_rpc.skip_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:11.805 killing process with pid 57246 00:06:11.805 03:57:04 skip_rpc.skip_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57246' 00:06:11.805 03:57:04 skip_rpc.skip_rpc -- common/autotest_common.sh@973 -- # kill 57246 00:06:11.805 03:57:04 skip_rpc.skip_rpc -- common/autotest_common.sh@978 -- # wait 57246 00:06:13.708 00:06:13.708 real 0m7.447s 00:06:13.708 user 0m6.976s 00:06:13.708 sys 0m0.392s 00:06:13.708 ************************************ 00:06:13.708 END TEST skip_rpc 00:06:13.708 ************************************ 00:06:13.708 03:57:06 skip_rpc.skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:13.708 03:57:06 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:13.708 03:57:06 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:06:13.708 03:57:06 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:13.708 03:57:06 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:13.708 03:57:06 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:13.708 
************************************ 00:06:13.708 START TEST skip_rpc_with_json 00:06:13.708 ************************************ 00:06:13.708 03:57:06 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_json 00:06:13.708 03:57:06 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:06:13.708 03:57:06 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=57350 00:06:13.708 03:57:06 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:13.708 03:57:06 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:06:13.708 03:57:06 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 57350 00:06:13.708 03:57:06 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@835 -- # '[' -z 57350 ']' 00:06:13.708 03:57:06 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:13.708 03:57:06 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:13.708 03:57:06 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:13.708 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:13.708 03:57:06 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:13.708 03:57:06 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:06:13.708 [2024-12-06 03:57:07.006932] Starting SPDK v25.01-pre git sha1 a4d2a837b / DPDK 24.03.0 initialization... 
00:06:13.708 [2024-12-06 03:57:07.007093] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57350 ] 00:06:13.967 [2024-12-06 03:57:07.179975] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:13.967 [2024-12-06 03:57:07.294499] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:14.906 03:57:08 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:14.906 03:57:08 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@868 -- # return 0 00:06:14.906 03:57:08 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:06:14.906 03:57:08 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:14.906 03:57:08 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:06:14.906 [2024-12-06 03:57:08.181530] nvmf_rpc.c:2707:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:06:14.906 request: 00:06:14.906 { 00:06:14.906 "trtype": "tcp", 00:06:14.906 "method": "nvmf_get_transports", 00:06:14.906 "req_id": 1 00:06:14.906 } 00:06:14.906 Got JSON-RPC error response 00:06:14.906 response: 00:06:14.906 { 00:06:14.906 "code": -19, 00:06:14.906 "message": "No such device" 00:06:14.906 } 00:06:14.906 03:57:08 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:06:14.906 03:57:08 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:06:14.906 03:57:08 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:14.906 03:57:08 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:06:14.906 [2024-12-06 03:57:08.197623] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 
00:06:14.906 03:57:08 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:14.906 03:57:08 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:06:14.906 03:57:08 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:14.906 03:57:08 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:06:15.170 03:57:08 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:15.170 03:57:08 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:06:15.170 { 00:06:15.170 "subsystems": [ 00:06:15.170 { 00:06:15.170 "subsystem": "fsdev", 00:06:15.170 "config": [ 00:06:15.170 { 00:06:15.170 "method": "fsdev_set_opts", 00:06:15.170 "params": { 00:06:15.170 "fsdev_io_pool_size": 65535, 00:06:15.170 "fsdev_io_cache_size": 256 00:06:15.170 } 00:06:15.170 } 00:06:15.170 ] 00:06:15.170 }, 00:06:15.170 { 00:06:15.170 "subsystem": "keyring", 00:06:15.170 "config": [] 00:06:15.170 }, 00:06:15.170 { 00:06:15.170 "subsystem": "iobuf", 00:06:15.170 "config": [ 00:06:15.170 { 00:06:15.170 "method": "iobuf_set_options", 00:06:15.170 "params": { 00:06:15.170 "small_pool_count": 8192, 00:06:15.170 "large_pool_count": 1024, 00:06:15.170 "small_bufsize": 8192, 00:06:15.170 "large_bufsize": 135168, 00:06:15.170 "enable_numa": false 00:06:15.170 } 00:06:15.170 } 00:06:15.170 ] 00:06:15.170 }, 00:06:15.170 { 00:06:15.170 "subsystem": "sock", 00:06:15.170 "config": [ 00:06:15.170 { 00:06:15.170 "method": "sock_set_default_impl", 00:06:15.170 "params": { 00:06:15.170 "impl_name": "posix" 00:06:15.170 } 00:06:15.170 }, 00:06:15.170 { 00:06:15.170 "method": "sock_impl_set_options", 00:06:15.170 "params": { 00:06:15.170 "impl_name": "ssl", 00:06:15.170 "recv_buf_size": 4096, 00:06:15.170 "send_buf_size": 4096, 00:06:15.170 "enable_recv_pipe": true, 00:06:15.170 "enable_quickack": false, 00:06:15.170 
"enable_placement_id": 0, 00:06:15.170 "enable_zerocopy_send_server": true, 00:06:15.170 "enable_zerocopy_send_client": false, 00:06:15.170 "zerocopy_threshold": 0, 00:06:15.170 "tls_version": 0, 00:06:15.170 "enable_ktls": false 00:06:15.170 } 00:06:15.170 }, 00:06:15.170 { 00:06:15.170 "method": "sock_impl_set_options", 00:06:15.170 "params": { 00:06:15.170 "impl_name": "posix", 00:06:15.170 "recv_buf_size": 2097152, 00:06:15.170 "send_buf_size": 2097152, 00:06:15.170 "enable_recv_pipe": true, 00:06:15.170 "enable_quickack": false, 00:06:15.170 "enable_placement_id": 0, 00:06:15.170 "enable_zerocopy_send_server": true, 00:06:15.170 "enable_zerocopy_send_client": false, 00:06:15.170 "zerocopy_threshold": 0, 00:06:15.170 "tls_version": 0, 00:06:15.170 "enable_ktls": false 00:06:15.170 } 00:06:15.170 } 00:06:15.170 ] 00:06:15.170 }, 00:06:15.170 { 00:06:15.170 "subsystem": "vmd", 00:06:15.170 "config": [] 00:06:15.170 }, 00:06:15.170 { 00:06:15.170 "subsystem": "accel", 00:06:15.170 "config": [ 00:06:15.170 { 00:06:15.170 "method": "accel_set_options", 00:06:15.170 "params": { 00:06:15.170 "small_cache_size": 128, 00:06:15.170 "large_cache_size": 16, 00:06:15.170 "task_count": 2048, 00:06:15.170 "sequence_count": 2048, 00:06:15.170 "buf_count": 2048 00:06:15.170 } 00:06:15.170 } 00:06:15.170 ] 00:06:15.170 }, 00:06:15.170 { 00:06:15.170 "subsystem": "bdev", 00:06:15.170 "config": [ 00:06:15.170 { 00:06:15.170 "method": "bdev_set_options", 00:06:15.170 "params": { 00:06:15.170 "bdev_io_pool_size": 65535, 00:06:15.170 "bdev_io_cache_size": 256, 00:06:15.170 "bdev_auto_examine": true, 00:06:15.170 "iobuf_small_cache_size": 128, 00:06:15.170 "iobuf_large_cache_size": 16 00:06:15.170 } 00:06:15.170 }, 00:06:15.170 { 00:06:15.170 "method": "bdev_raid_set_options", 00:06:15.170 "params": { 00:06:15.170 "process_window_size_kb": 1024, 00:06:15.170 "process_max_bandwidth_mb_sec": 0 00:06:15.170 } 00:06:15.170 }, 00:06:15.170 { 00:06:15.170 "method": "bdev_iscsi_set_options", 
00:06:15.170 "params": { 00:06:15.170 "timeout_sec": 30 00:06:15.170 } 00:06:15.170 }, 00:06:15.170 { 00:06:15.170 "method": "bdev_nvme_set_options", 00:06:15.170 "params": { 00:06:15.170 "action_on_timeout": "none", 00:06:15.170 "timeout_us": 0, 00:06:15.170 "timeout_admin_us": 0, 00:06:15.170 "keep_alive_timeout_ms": 10000, 00:06:15.170 "arbitration_burst": 0, 00:06:15.170 "low_priority_weight": 0, 00:06:15.170 "medium_priority_weight": 0, 00:06:15.170 "high_priority_weight": 0, 00:06:15.170 "nvme_adminq_poll_period_us": 10000, 00:06:15.170 "nvme_ioq_poll_period_us": 0, 00:06:15.170 "io_queue_requests": 0, 00:06:15.170 "delay_cmd_submit": true, 00:06:15.170 "transport_retry_count": 4, 00:06:15.170 "bdev_retry_count": 3, 00:06:15.170 "transport_ack_timeout": 0, 00:06:15.170 "ctrlr_loss_timeout_sec": 0, 00:06:15.170 "reconnect_delay_sec": 0, 00:06:15.170 "fast_io_fail_timeout_sec": 0, 00:06:15.170 "disable_auto_failback": false, 00:06:15.170 "generate_uuids": false, 00:06:15.170 "transport_tos": 0, 00:06:15.170 "nvme_error_stat": false, 00:06:15.170 "rdma_srq_size": 0, 00:06:15.170 "io_path_stat": false, 00:06:15.170 "allow_accel_sequence": false, 00:06:15.170 "rdma_max_cq_size": 0, 00:06:15.170 "rdma_cm_event_timeout_ms": 0, 00:06:15.170 "dhchap_digests": [ 00:06:15.170 "sha256", 00:06:15.170 "sha384", 00:06:15.170 "sha512" 00:06:15.170 ], 00:06:15.170 "dhchap_dhgroups": [ 00:06:15.170 "null", 00:06:15.170 "ffdhe2048", 00:06:15.170 "ffdhe3072", 00:06:15.170 "ffdhe4096", 00:06:15.170 "ffdhe6144", 00:06:15.170 "ffdhe8192" 00:06:15.170 ] 00:06:15.170 } 00:06:15.170 }, 00:06:15.170 { 00:06:15.170 "method": "bdev_nvme_set_hotplug", 00:06:15.170 "params": { 00:06:15.170 "period_us": 100000, 00:06:15.170 "enable": false 00:06:15.170 } 00:06:15.170 }, 00:06:15.170 { 00:06:15.170 "method": "bdev_wait_for_examine" 00:06:15.170 } 00:06:15.170 ] 00:06:15.170 }, 00:06:15.170 { 00:06:15.170 "subsystem": "scsi", 00:06:15.170 "config": null 00:06:15.170 }, 00:06:15.170 { 
00:06:15.170 "subsystem": "scheduler", 00:06:15.170 "config": [ 00:06:15.170 { 00:06:15.170 "method": "framework_set_scheduler", 00:06:15.170 "params": { 00:06:15.170 "name": "static" 00:06:15.170 } 00:06:15.170 } 00:06:15.170 ] 00:06:15.170 }, 00:06:15.170 { 00:06:15.170 "subsystem": "vhost_scsi", 00:06:15.170 "config": [] 00:06:15.170 }, 00:06:15.170 { 00:06:15.170 "subsystem": "vhost_blk", 00:06:15.170 "config": [] 00:06:15.170 }, 00:06:15.170 { 00:06:15.170 "subsystem": "ublk", 00:06:15.170 "config": [] 00:06:15.170 }, 00:06:15.170 { 00:06:15.170 "subsystem": "nbd", 00:06:15.170 "config": [] 00:06:15.170 }, 00:06:15.170 { 00:06:15.170 "subsystem": "nvmf", 00:06:15.170 "config": [ 00:06:15.170 { 00:06:15.170 "method": "nvmf_set_config", 00:06:15.170 "params": { 00:06:15.170 "discovery_filter": "match_any", 00:06:15.170 "admin_cmd_passthru": { 00:06:15.170 "identify_ctrlr": false 00:06:15.170 }, 00:06:15.170 "dhchap_digests": [ 00:06:15.170 "sha256", 00:06:15.170 "sha384", 00:06:15.170 "sha512" 00:06:15.170 ], 00:06:15.170 "dhchap_dhgroups": [ 00:06:15.170 "null", 00:06:15.170 "ffdhe2048", 00:06:15.170 "ffdhe3072", 00:06:15.170 "ffdhe4096", 00:06:15.170 "ffdhe6144", 00:06:15.170 "ffdhe8192" 00:06:15.170 ] 00:06:15.170 } 00:06:15.170 }, 00:06:15.170 { 00:06:15.171 "method": "nvmf_set_max_subsystems", 00:06:15.171 "params": { 00:06:15.171 "max_subsystems": 1024 00:06:15.171 } 00:06:15.171 }, 00:06:15.171 { 00:06:15.171 "method": "nvmf_set_crdt", 00:06:15.171 "params": { 00:06:15.171 "crdt1": 0, 00:06:15.171 "crdt2": 0, 00:06:15.171 "crdt3": 0 00:06:15.171 } 00:06:15.171 }, 00:06:15.171 { 00:06:15.171 "method": "nvmf_create_transport", 00:06:15.171 "params": { 00:06:15.171 "trtype": "TCP", 00:06:15.171 "max_queue_depth": 128, 00:06:15.171 "max_io_qpairs_per_ctrlr": 127, 00:06:15.171 "in_capsule_data_size": 4096, 00:06:15.171 "max_io_size": 131072, 00:06:15.171 "io_unit_size": 131072, 00:06:15.171 "max_aq_depth": 128, 00:06:15.171 "num_shared_buffers": 511, 
00:06:15.171 "buf_cache_size": 4294967295, 00:06:15.171 "dif_insert_or_strip": false, 00:06:15.171 "zcopy": false, 00:06:15.171 "c2h_success": true, 00:06:15.171 "sock_priority": 0, 00:06:15.171 "abort_timeout_sec": 1, 00:06:15.171 "ack_timeout": 0, 00:06:15.171 "data_wr_pool_size": 0 00:06:15.171 } 00:06:15.171 } 00:06:15.171 ] 00:06:15.171 }, 00:06:15.171 { 00:06:15.171 "subsystem": "iscsi", 00:06:15.171 "config": [ 00:06:15.171 { 00:06:15.171 "method": "iscsi_set_options", 00:06:15.171 "params": { 00:06:15.171 "node_base": "iqn.2016-06.io.spdk", 00:06:15.171 "max_sessions": 128, 00:06:15.171 "max_connections_per_session": 2, 00:06:15.171 "max_queue_depth": 64, 00:06:15.171 "default_time2wait": 2, 00:06:15.171 "default_time2retain": 20, 00:06:15.171 "first_burst_length": 8192, 00:06:15.171 "immediate_data": true, 00:06:15.171 "allow_duplicated_isid": false, 00:06:15.171 "error_recovery_level": 0, 00:06:15.171 "nop_timeout": 60, 00:06:15.171 "nop_in_interval": 30, 00:06:15.171 "disable_chap": false, 00:06:15.171 "require_chap": false, 00:06:15.171 "mutual_chap": false, 00:06:15.171 "chap_group": 0, 00:06:15.171 "max_large_datain_per_connection": 64, 00:06:15.171 "max_r2t_per_connection": 4, 00:06:15.171 "pdu_pool_size": 36864, 00:06:15.171 "immediate_data_pool_size": 16384, 00:06:15.171 "data_out_pool_size": 2048 00:06:15.171 } 00:06:15.171 } 00:06:15.171 ] 00:06:15.171 } 00:06:15.171 ] 00:06:15.171 } 00:06:15.171 03:57:08 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:06:15.171 03:57:08 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 57350 00:06:15.171 03:57:08 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 57350 ']' 00:06:15.171 03:57:08 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 57350 00:06:15.171 03:57:08 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname 00:06:15.171 03:57:08 skip_rpc.skip_rpc_with_json -- 
common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:15.171 03:57:08 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57350 00:06:15.171 killing process with pid 57350 00:06:15.171 03:57:08 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:15.171 03:57:08 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:15.171 03:57:08 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57350' 00:06:15.171 03:57:08 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # kill 57350 00:06:15.171 03:57:08 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 57350 00:06:17.708 03:57:10 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=57406 00:06:17.708 03:57:10 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:06:17.708 03:57:10 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:06:22.976 03:57:15 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 57406 00:06:22.976 03:57:15 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 57406 ']' 00:06:22.976 03:57:15 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 57406 00:06:22.976 03:57:15 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname 00:06:22.976 03:57:15 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:22.976 03:57:15 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57406 00:06:22.976 killing process with pid 57406 00:06:22.976 03:57:15 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:22.976 03:57:15 skip_rpc.skip_rpc_with_json -- 
common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:22.976 03:57:15 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57406' 00:06:22.976 03:57:15 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # kill 57406 00:06:22.976 03:57:15 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 57406 00:06:24.881 03:57:18 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:06:24.881 03:57:18 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:06:25.141 ************************************ 00:06:25.141 END TEST skip_rpc_with_json 00:06:25.141 ************************************ 00:06:25.141 00:06:25.141 real 0m11.335s 00:06:25.141 user 0m10.782s 00:06:25.141 sys 0m0.835s 00:06:25.141 03:57:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:25.141 03:57:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:06:25.141 03:57:18 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:06:25.141 03:57:18 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:25.141 03:57:18 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:25.141 03:57:18 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:25.141 ************************************ 00:06:25.141 START TEST skip_rpc_with_delay 00:06:25.141 ************************************ 00:06:25.141 03:57:18 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_delay 00:06:25.141 03:57:18 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:06:25.141 03:57:18 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # local es=0 00:06:25.141 
03:57:18 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:06:25.141 03:57:18 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:25.141 03:57:18 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:25.141 03:57:18 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:25.141 03:57:18 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:25.141 03:57:18 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:25.141 03:57:18 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:25.141 03:57:18 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:25.141 03:57:18 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:06:25.141 03:57:18 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:06:25.141 [2024-12-06 03:57:18.400863] app.c: 842:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
00:06:25.141 03:57:18 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # es=1 00:06:25.141 03:57:18 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:25.141 03:57:18 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:25.141 03:57:18 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:25.141 00:06:25.141 real 0m0.164s 00:06:25.141 user 0m0.096s 00:06:25.141 sys 0m0.066s 00:06:25.141 ************************************ 00:06:25.141 END TEST skip_rpc_with_delay 00:06:25.141 ************************************ 00:06:25.141 03:57:18 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:25.141 03:57:18 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:06:25.402 03:57:18 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:06:25.402 03:57:18 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:06:25.402 03:57:18 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:06:25.402 03:57:18 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:25.402 03:57:18 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:25.402 03:57:18 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:25.402 ************************************ 00:06:25.402 START TEST exit_on_failed_rpc_init 00:06:25.402 ************************************ 00:06:25.402 03:57:18 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1129 -- # test_exit_on_failed_rpc_init 00:06:25.402 03:57:18 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=57534 00:06:25.402 03:57:18 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:25.402 03:57:18 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 57534 00:06:25.402 03:57:18 
skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@835 -- # '[' -z 57534 ']' 00:06:25.402 03:57:18 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:25.402 03:57:18 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:25.402 03:57:18 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:25.402 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:25.402 03:57:18 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:25.402 03:57:18 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:06:25.402 [2024-12-06 03:57:18.629657] Starting SPDK v25.01-pre git sha1 a4d2a837b / DPDK 24.03.0 initialization... 00:06:25.402 [2024-12-06 03:57:18.629885] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57534 ] 00:06:25.661 [2024-12-06 03:57:18.803265] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:25.661 [2024-12-06 03:57:18.924035] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:26.601 03:57:19 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:26.601 03:57:19 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@868 -- # return 0 00:06:26.601 03:57:19 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:06:26.601 03:57:19 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:06:26.601 03:57:19 
skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # local es=0 00:06:26.601 03:57:19 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:06:26.601 03:57:19 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:26.601 03:57:19 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:26.601 03:57:19 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:26.601 03:57:19 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:26.601 03:57:19 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:26.601 03:57:19 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:26.601 03:57:19 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:26.601 03:57:19 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:06:26.601 03:57:19 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:06:26.601 [2024-12-06 03:57:19.921088] Starting SPDK v25.01-pre git sha1 a4d2a837b / DPDK 24.03.0 initialization... 
00:06:26.601 [2024-12-06 03:57:19.921207] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57552 ] 00:06:26.860 [2024-12-06 03:57:20.098128] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:27.119 [2024-12-06 03:57:20.223371] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:27.119 [2024-12-06 03:57:20.223646] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 00:06:27.119 [2024-12-06 03:57:20.223666] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:06:27.119 [2024-12-06 03:57:20.223677] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:27.379 03:57:20 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # es=234 00:06:27.380 03:57:20 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:27.380 03:57:20 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@664 -- # es=106 00:06:27.380 03:57:20 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@665 -- # case "$es" in 00:06:27.380 03:57:20 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@672 -- # es=1 00:06:27.380 03:57:20 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:27.380 03:57:20 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:06:27.380 03:57:20 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 57534 00:06:27.380 03:57:20 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # '[' -z 57534 ']' 00:06:27.380 03:57:20 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # kill -0 57534 00:06:27.380 03:57:20 
skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # uname 00:06:27.380 03:57:20 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:27.380 03:57:20 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57534 00:06:27.380 03:57:20 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:27.380 03:57:20 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:27.380 03:57:20 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57534' 00:06:27.380 killing process with pid 57534 00:06:27.380 03:57:20 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@973 -- # kill 57534 00:06:27.380 03:57:20 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@978 -- # wait 57534 00:06:29.917 00:06:29.917 real 0m4.412s 00:06:29.917 user 0m4.771s 00:06:29.917 sys 0m0.554s 00:06:29.917 03:57:22 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:29.917 03:57:22 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:06:29.917 ************************************ 00:06:29.917 END TEST exit_on_failed_rpc_init 00:06:29.917 ************************************ 00:06:29.917 03:57:22 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:06:29.917 00:06:29.918 real 0m23.845s 00:06:29.918 user 0m22.847s 00:06:29.918 sys 0m2.127s 00:06:29.918 03:57:22 skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:29.918 03:57:22 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:29.918 ************************************ 00:06:29.918 END TEST skip_rpc 00:06:29.918 ************************************ 00:06:29.918 03:57:23 -- spdk/autotest.sh@158 -- # run_test rpc_client 
/home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:06:29.918 03:57:23 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:29.918 03:57:23 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:29.918 03:57:23 -- common/autotest_common.sh@10 -- # set +x 00:06:29.918 ************************************ 00:06:29.918 START TEST rpc_client 00:06:29.918 ************************************ 00:06:29.918 03:57:23 rpc_client -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:06:29.918 * Looking for test storage... 00:06:29.918 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc_client 00:06:29.918 03:57:23 rpc_client -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:06:29.918 03:57:23 rpc_client -- common/autotest_common.sh@1711 -- # lcov --version 00:06:29.918 03:57:23 rpc_client -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:06:29.918 03:57:23 rpc_client -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:06:29.918 03:57:23 rpc_client -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:29.918 03:57:23 rpc_client -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:29.918 03:57:23 rpc_client -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:29.918 03:57:23 rpc_client -- scripts/common.sh@336 -- # IFS=.-: 00:06:29.918 03:57:23 rpc_client -- scripts/common.sh@336 -- # read -ra ver1 00:06:29.918 03:57:23 rpc_client -- scripts/common.sh@337 -- # IFS=.-: 00:06:29.918 03:57:23 rpc_client -- scripts/common.sh@337 -- # read -ra ver2 00:06:29.918 03:57:23 rpc_client -- scripts/common.sh@338 -- # local 'op=<' 00:06:29.918 03:57:23 rpc_client -- scripts/common.sh@340 -- # ver1_l=2 00:06:29.918 03:57:23 rpc_client -- scripts/common.sh@341 -- # ver2_l=1 00:06:29.918 03:57:23 rpc_client -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:29.918 03:57:23 rpc_client -- scripts/common.sh@344 -- # case "$op" in 00:06:29.918 03:57:23 rpc_client -- scripts/common.sh@345 
-- # : 1 00:06:29.918 03:57:23 rpc_client -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:29.918 03:57:23 rpc_client -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:29.918 03:57:23 rpc_client -- scripts/common.sh@365 -- # decimal 1 00:06:30.177 03:57:23 rpc_client -- scripts/common.sh@353 -- # local d=1 00:06:30.177 03:57:23 rpc_client -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:30.177 03:57:23 rpc_client -- scripts/common.sh@355 -- # echo 1 00:06:30.177 03:57:23 rpc_client -- scripts/common.sh@365 -- # ver1[v]=1 00:06:30.177 03:57:23 rpc_client -- scripts/common.sh@366 -- # decimal 2 00:06:30.177 03:57:23 rpc_client -- scripts/common.sh@353 -- # local d=2 00:06:30.177 03:57:23 rpc_client -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:30.177 03:57:23 rpc_client -- scripts/common.sh@355 -- # echo 2 00:06:30.177 03:57:23 rpc_client -- scripts/common.sh@366 -- # ver2[v]=2 00:06:30.177 03:57:23 rpc_client -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:30.177 03:57:23 rpc_client -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:30.177 03:57:23 rpc_client -- scripts/common.sh@368 -- # return 0 00:06:30.177 03:57:23 rpc_client -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:30.177 03:57:23 rpc_client -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:06:30.177 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:30.177 --rc genhtml_branch_coverage=1 00:06:30.177 --rc genhtml_function_coverage=1 00:06:30.177 --rc genhtml_legend=1 00:06:30.177 --rc geninfo_all_blocks=1 00:06:30.177 --rc geninfo_unexecuted_blocks=1 00:06:30.177 00:06:30.177 ' 00:06:30.177 03:57:23 rpc_client -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:06:30.177 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:30.177 --rc genhtml_branch_coverage=1 00:06:30.177 --rc genhtml_function_coverage=1 00:06:30.177 --rc 
genhtml_legend=1 00:06:30.177 --rc geninfo_all_blocks=1 00:06:30.177 --rc geninfo_unexecuted_blocks=1 00:06:30.177 00:06:30.177 ' 00:06:30.178 03:57:23 rpc_client -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:06:30.178 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:30.178 --rc genhtml_branch_coverage=1 00:06:30.178 --rc genhtml_function_coverage=1 00:06:30.178 --rc genhtml_legend=1 00:06:30.178 --rc geninfo_all_blocks=1 00:06:30.178 --rc geninfo_unexecuted_blocks=1 00:06:30.178 00:06:30.178 ' 00:06:30.178 03:57:23 rpc_client -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:06:30.178 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:30.178 --rc genhtml_branch_coverage=1 00:06:30.178 --rc genhtml_function_coverage=1 00:06:30.178 --rc genhtml_legend=1 00:06:30.178 --rc geninfo_all_blocks=1 00:06:30.178 --rc geninfo_unexecuted_blocks=1 00:06:30.178 00:06:30.178 ' 00:06:30.178 03:57:23 rpc_client -- rpc_client/rpc_client.sh@10 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client_test 00:06:30.178 OK 00:06:30.178 03:57:23 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:06:30.178 00:06:30.178 real 0m0.300s 00:06:30.178 user 0m0.152s 00:06:30.178 sys 0m0.165s 00:06:30.178 03:57:23 rpc_client -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:30.178 03:57:23 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:06:30.178 ************************************ 00:06:30.178 END TEST rpc_client 00:06:30.178 ************************************ 00:06:30.178 03:57:23 -- spdk/autotest.sh@159 -- # run_test json_config /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:06:30.178 03:57:23 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:30.178 03:57:23 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:30.178 03:57:23 -- common/autotest_common.sh@10 -- # set +x 00:06:30.178 ************************************ 00:06:30.178 START TEST json_config 
00:06:30.178 ************************************ 00:06:30.178 03:57:23 json_config -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:06:30.178 03:57:23 json_config -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:06:30.178 03:57:23 json_config -- common/autotest_common.sh@1711 -- # lcov --version 00:06:30.178 03:57:23 json_config -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:06:30.437 03:57:23 json_config -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:06:30.437 03:57:23 json_config -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:30.437 03:57:23 json_config -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:30.437 03:57:23 json_config -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:30.437 03:57:23 json_config -- scripts/common.sh@336 -- # IFS=.-: 00:06:30.437 03:57:23 json_config -- scripts/common.sh@336 -- # read -ra ver1 00:06:30.437 03:57:23 json_config -- scripts/common.sh@337 -- # IFS=.-: 00:06:30.437 03:57:23 json_config -- scripts/common.sh@337 -- # read -ra ver2 00:06:30.437 03:57:23 json_config -- scripts/common.sh@338 -- # local 'op=<' 00:06:30.437 03:57:23 json_config -- scripts/common.sh@340 -- # ver1_l=2 00:06:30.437 03:57:23 json_config -- scripts/common.sh@341 -- # ver2_l=1 00:06:30.437 03:57:23 json_config -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:30.437 03:57:23 json_config -- scripts/common.sh@344 -- # case "$op" in 00:06:30.437 03:57:23 json_config -- scripts/common.sh@345 -- # : 1 00:06:30.437 03:57:23 json_config -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:30.437 03:57:23 json_config -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:30.437 03:57:23 json_config -- scripts/common.sh@365 -- # decimal 1 00:06:30.437 03:57:23 json_config -- scripts/common.sh@353 -- # local d=1 00:06:30.437 03:57:23 json_config -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:30.437 03:57:23 json_config -- scripts/common.sh@355 -- # echo 1 00:06:30.437 03:57:23 json_config -- scripts/common.sh@365 -- # ver1[v]=1 00:06:30.437 03:57:23 json_config -- scripts/common.sh@366 -- # decimal 2 00:06:30.437 03:57:23 json_config -- scripts/common.sh@353 -- # local d=2 00:06:30.437 03:57:23 json_config -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:30.437 03:57:23 json_config -- scripts/common.sh@355 -- # echo 2 00:06:30.437 03:57:23 json_config -- scripts/common.sh@366 -- # ver2[v]=2 00:06:30.437 03:57:23 json_config -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:30.437 03:57:23 json_config -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:30.437 03:57:23 json_config -- scripts/common.sh@368 -- # return 0 00:06:30.437 03:57:23 json_config -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:30.437 03:57:23 json_config -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:06:30.437 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:30.437 --rc genhtml_branch_coverage=1 00:06:30.437 --rc genhtml_function_coverage=1 00:06:30.437 --rc genhtml_legend=1 00:06:30.437 --rc geninfo_all_blocks=1 00:06:30.437 --rc geninfo_unexecuted_blocks=1 00:06:30.437 00:06:30.437 ' 00:06:30.437 03:57:23 json_config -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:06:30.437 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:30.437 --rc genhtml_branch_coverage=1 00:06:30.437 --rc genhtml_function_coverage=1 00:06:30.437 --rc genhtml_legend=1 00:06:30.437 --rc geninfo_all_blocks=1 00:06:30.437 --rc geninfo_unexecuted_blocks=1 00:06:30.437 00:06:30.437 ' 00:06:30.437 03:57:23 json_config -- 
common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:06:30.437 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:30.437 --rc genhtml_branch_coverage=1 00:06:30.437 --rc genhtml_function_coverage=1 00:06:30.437 --rc genhtml_legend=1 00:06:30.437 --rc geninfo_all_blocks=1 00:06:30.437 --rc geninfo_unexecuted_blocks=1 00:06:30.437 00:06:30.437 ' 00:06:30.437 03:57:23 json_config -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:06:30.437 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:30.437 --rc genhtml_branch_coverage=1 00:06:30.437 --rc genhtml_function_coverage=1 00:06:30.437 --rc genhtml_legend=1 00:06:30.437 --rc geninfo_all_blocks=1 00:06:30.437 --rc geninfo_unexecuted_blocks=1 00:06:30.437 00:06:30.437 ' 00:06:30.437 03:57:23 json_config -- json_config/json_config.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:06:30.437 03:57:23 json_config -- nvmf/common.sh@7 -- # uname -s 00:06:30.437 03:57:23 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:30.437 03:57:23 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:30.437 03:57:23 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:30.437 03:57:23 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:30.437 03:57:23 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:30.437 03:57:23 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:30.437 03:57:23 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:30.437 03:57:23 json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:30.437 03:57:23 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:30.437 03:57:23 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:30.437 03:57:23 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:51869f7d-801f-4513-a29b-682aef1a1ed9 00:06:30.437 03:57:23 json_config -- nvmf/common.sh@18 -- # 
NVME_HOSTID=51869f7d-801f-4513-a29b-682aef1a1ed9 00:06:30.437 03:57:23 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:30.437 03:57:23 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:30.437 03:57:23 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:06:30.437 03:57:23 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:30.437 03:57:23 json_config -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:06:30.437 03:57:23 json_config -- scripts/common.sh@15 -- # shopt -s extglob 00:06:30.437 03:57:23 json_config -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:30.437 03:57:23 json_config -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:30.437 03:57:23 json_config -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:30.437 03:57:23 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:30.437 03:57:23 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:30.437 03:57:23 json_config -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:30.437 03:57:23 json_config -- paths/export.sh@5 -- # export PATH 00:06:30.437 03:57:23 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:30.437 03:57:23 json_config -- nvmf/common.sh@51 -- # : 0 00:06:30.437 03:57:23 json_config -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:06:30.437 03:57:23 json_config -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:06:30.437 03:57:23 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:30.437 03:57:23 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:30.437 03:57:23 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:30.437 03:57:23 json_config -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:06:30.437 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:06:30.437 03:57:23 json_config -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:06:30.437 03:57:23 json_config -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:06:30.437 03:57:23 json_config -- nvmf/common.sh@55 -- # have_pci_nics=0 00:06:30.438 03:57:23 json_config -- json_config/json_config.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 
00:06:30.438 03:57:23 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:06:30.438 03:57:23 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:06:30.438 03:57:23 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:06:30.438 03:57:23 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:06:30.438 03:57:23 json_config -- json_config/json_config.sh@27 -- # echo 'WARNING: No tests are enabled so not running JSON configuration tests' 00:06:30.438 WARNING: No tests are enabled so not running JSON configuration tests 00:06:30.438 03:57:23 json_config -- json_config/json_config.sh@28 -- # exit 0 00:06:30.438 00:06:30.438 real 0m0.213s 00:06:30.438 user 0m0.126s 00:06:30.438 sys 0m0.090s 00:06:30.438 03:57:23 json_config -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:30.438 03:57:23 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:30.438 ************************************ 00:06:30.438 END TEST json_config 00:06:30.438 ************************************ 00:06:30.438 03:57:23 -- spdk/autotest.sh@160 -- # run_test json_config_extra_key /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:06:30.438 03:57:23 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:30.438 03:57:23 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:30.438 03:57:23 -- common/autotest_common.sh@10 -- # set +x 00:06:30.438 ************************************ 00:06:30.438 START TEST json_config_extra_key 00:06:30.438 ************************************ 00:06:30.438 03:57:23 json_config_extra_key -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:06:30.697 03:57:23 json_config_extra_key -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:06:30.697 03:57:23 json_config_extra_key -- 
common/autotest_common.sh@1711 -- # lcov --version 00:06:30.697 03:57:23 json_config_extra_key -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:06:30.697 03:57:23 json_config_extra_key -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:06:30.697 03:57:23 json_config_extra_key -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:30.697 03:57:23 json_config_extra_key -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:30.697 03:57:23 json_config_extra_key -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:30.697 03:57:23 json_config_extra_key -- scripts/common.sh@336 -- # IFS=.-: 00:06:30.697 03:57:23 json_config_extra_key -- scripts/common.sh@336 -- # read -ra ver1 00:06:30.697 03:57:23 json_config_extra_key -- scripts/common.sh@337 -- # IFS=.-: 00:06:30.697 03:57:23 json_config_extra_key -- scripts/common.sh@337 -- # read -ra ver2 00:06:30.697 03:57:23 json_config_extra_key -- scripts/common.sh@338 -- # local 'op=<' 00:06:30.697 03:57:23 json_config_extra_key -- scripts/common.sh@340 -- # ver1_l=2 00:06:30.697 03:57:23 json_config_extra_key -- scripts/common.sh@341 -- # ver2_l=1 00:06:30.698 03:57:23 json_config_extra_key -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:30.698 03:57:23 json_config_extra_key -- scripts/common.sh@344 -- # case "$op" in 00:06:30.698 03:57:23 json_config_extra_key -- scripts/common.sh@345 -- # : 1 00:06:30.698 03:57:23 json_config_extra_key -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:30.698 03:57:23 json_config_extra_key -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:30.698 03:57:23 json_config_extra_key -- scripts/common.sh@365 -- # decimal 1 00:06:30.698 03:57:23 json_config_extra_key -- scripts/common.sh@353 -- # local d=1 00:06:30.698 03:57:23 json_config_extra_key -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:30.698 03:57:23 json_config_extra_key -- scripts/common.sh@355 -- # echo 1 00:06:30.698 03:57:23 json_config_extra_key -- scripts/common.sh@365 -- # ver1[v]=1 00:06:30.698 03:57:23 json_config_extra_key -- scripts/common.sh@366 -- # decimal 2 00:06:30.698 03:57:23 json_config_extra_key -- scripts/common.sh@353 -- # local d=2 00:06:30.698 03:57:23 json_config_extra_key -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:30.698 03:57:23 json_config_extra_key -- scripts/common.sh@355 -- # echo 2 00:06:30.698 03:57:23 json_config_extra_key -- scripts/common.sh@366 -- # ver2[v]=2 00:06:30.698 03:57:23 json_config_extra_key -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:30.698 03:57:23 json_config_extra_key -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:30.698 03:57:23 json_config_extra_key -- scripts/common.sh@368 -- # return 0 00:06:30.698 03:57:23 json_config_extra_key -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:30.698 03:57:23 json_config_extra_key -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:06:30.698 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:30.698 --rc genhtml_branch_coverage=1 00:06:30.698 --rc genhtml_function_coverage=1 00:06:30.698 --rc genhtml_legend=1 00:06:30.698 --rc geninfo_all_blocks=1 00:06:30.698 --rc geninfo_unexecuted_blocks=1 00:06:30.698 00:06:30.698 ' 00:06:30.698 03:57:23 json_config_extra_key -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:06:30.698 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:30.698 --rc genhtml_branch_coverage=1 00:06:30.698 --rc genhtml_function_coverage=1 00:06:30.698 --rc 
genhtml_legend=1 00:06:30.698 --rc geninfo_all_blocks=1 00:06:30.698 --rc geninfo_unexecuted_blocks=1 00:06:30.698 00:06:30.698 ' 00:06:30.698 03:57:23 json_config_extra_key -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:06:30.698 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:30.698 --rc genhtml_branch_coverage=1 00:06:30.698 --rc genhtml_function_coverage=1 00:06:30.698 --rc genhtml_legend=1 00:06:30.698 --rc geninfo_all_blocks=1 00:06:30.698 --rc geninfo_unexecuted_blocks=1 00:06:30.698 00:06:30.698 ' 00:06:30.698 03:57:23 json_config_extra_key -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:06:30.698 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:30.698 --rc genhtml_branch_coverage=1 00:06:30.698 --rc genhtml_function_coverage=1 00:06:30.698 --rc genhtml_legend=1 00:06:30.698 --rc geninfo_all_blocks=1 00:06:30.698 --rc geninfo_unexecuted_blocks=1 00:06:30.698 00:06:30.698 ' 00:06:30.698 03:57:23 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:06:30.698 03:57:23 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:06:30.698 03:57:23 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:30.698 03:57:23 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:30.698 03:57:23 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:30.698 03:57:23 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:30.698 03:57:23 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:30.698 03:57:23 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:30.698 03:57:23 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:30.698 03:57:23 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:30.698 03:57:23 json_config_extra_key -- nvmf/common.sh@16 -- # 
NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:30.698 03:57:23 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:30.698 03:57:23 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:51869f7d-801f-4513-a29b-682aef1a1ed9 00:06:30.698 03:57:23 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=51869f7d-801f-4513-a29b-682aef1a1ed9 00:06:30.698 03:57:23 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:30.698 03:57:23 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:30.698 03:57:23 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:06:30.698 03:57:23 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:30.698 03:57:23 json_config_extra_key -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:06:30.698 03:57:23 json_config_extra_key -- scripts/common.sh@15 -- # shopt -s extglob 00:06:30.698 03:57:23 json_config_extra_key -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:30.698 03:57:23 json_config_extra_key -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:30.698 03:57:23 json_config_extra_key -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:30.698 03:57:23 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:30.698 03:57:23 json_config_extra_key -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:30.698 03:57:23 json_config_extra_key -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:30.698 03:57:23 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:06:30.698 03:57:23 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:30.698 03:57:23 json_config_extra_key -- nvmf/common.sh@51 -- # : 0 00:06:30.698 03:57:23 json_config_extra_key -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:06:30.698 03:57:23 json_config_extra_key -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:06:30.698 03:57:23 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:30.698 03:57:23 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:30.698 03:57:23 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 
00:06:30.698 03:57:23 json_config_extra_key -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:06:30.698 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:06:30.698 03:57:23 json_config_extra_key -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:06:30.698 03:57:23 json_config_extra_key -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:06:30.698 03:57:23 json_config_extra_key -- nvmf/common.sh@55 -- # have_pci_nics=0 00:06:30.698 03:57:23 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:06:30.698 03:57:23 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:06:30.698 03:57:23 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:06:30.698 03:57:23 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:06:30.698 03:57:23 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:06:30.698 03:57:23 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:06:30.698 03:57:23 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:06:30.698 03:57:23 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json') 00:06:30.698 03:57:23 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:06:30.698 03:57:23 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:06:30.698 INFO: launching applications... 00:06:30.698 03:57:23 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 
00:06:30.698 03:57:23 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:06:30.698 03:57:23 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:06:30.698 03:57:23 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:06:30.698 03:57:23 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:06:30.698 03:57:23 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:06:30.698 03:57:23 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:06:30.698 03:57:23 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:30.698 03:57:23 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:30.698 03:57:23 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=57762 00:06:30.698 Waiting for target to run... 00:06:30.698 03:57:23 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:06:30.698 03:57:23 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 57762 /var/tmp/spdk_tgt.sock 00:06:30.698 03:57:23 json_config_extra_key -- common/autotest_common.sh@835 -- # '[' -z 57762 ']' 00:06:30.698 03:57:23 json_config_extra_key -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:06:30.698 03:57:23 json_config_extra_key -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:30.698 03:57:23 json_config_extra_key -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:06:30.698 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 
00:06:30.698 03:57:23 json_config_extra_key -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:06:30.699 03:57:23 json_config_extra_key -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:30.699 03:57:23 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:06:30.699 [2024-12-06 03:57:24.032626] Starting SPDK v25.01-pre git sha1 a4d2a837b / DPDK 24.03.0 initialization... 00:06:30.699 [2024-12-06 03:57:24.032753] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57762 ] 00:06:31.267 [2024-12-06 03:57:24.417274] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:31.267 [2024-12-06 03:57:24.522301] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:32.206 03:57:25 json_config_extra_key -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:32.206 03:57:25 json_config_extra_key -- common/autotest_common.sh@868 -- # return 0 00:06:32.206 00:06:32.206 03:57:25 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:06:32.206 INFO: shutting down applications... 00:06:32.206 03:57:25 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 
00:06:32.206 03:57:25 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:06:32.206 03:57:25 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:06:32.206 03:57:25 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:06:32.206 03:57:25 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 57762 ]] 00:06:32.206 03:57:25 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 57762 00:06:32.206 03:57:25 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:06:32.206 03:57:25 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:32.206 03:57:25 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57762 00:06:32.206 03:57:25 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:06:32.464 03:57:25 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:06:32.464 03:57:25 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:32.464 03:57:25 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57762 00:06:32.464 03:57:25 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:06:33.030 03:57:26 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:06:33.030 03:57:26 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:33.030 03:57:26 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57762 00:06:33.030 03:57:26 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:06:33.597 03:57:26 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:06:33.597 03:57:26 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:33.597 03:57:26 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57762 00:06:33.597 03:57:26 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:06:34.231 03:57:27 json_config_extra_key -- json_config/common.sh@40 -- # 
(( i++ )) 00:06:34.231 03:57:27 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:34.231 03:57:27 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57762 00:06:34.231 03:57:27 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:06:34.506 03:57:27 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:06:34.506 03:57:27 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:34.506 03:57:27 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57762 00:06:34.506 03:57:27 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:06:35.074 03:57:28 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:06:35.074 03:57:28 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:35.074 03:57:28 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57762 00:06:35.074 03:57:28 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:06:35.074 03:57:28 json_config_extra_key -- json_config/common.sh@43 -- # break 00:06:35.074 03:57:28 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:06:35.074 SPDK target shutdown done 00:06:35.074 03:57:28 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:06:35.074 Success 00:06:35.074 03:57:28 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:06:35.074 ************************************ 00:06:35.074 END TEST json_config_extra_key 00:06:35.074 ************************************ 00:06:35.074 00:06:35.074 real 0m4.587s 00:06:35.074 user 0m4.145s 00:06:35.074 sys 0m0.550s 00:06:35.074 03:57:28 json_config_extra_key -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:35.074 03:57:28 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:06:35.074 03:57:28 -- spdk/autotest.sh@161 -- # run_test alias_rpc 
/home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:06:35.074 03:57:28 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:35.074 03:57:28 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:35.074 03:57:28 -- common/autotest_common.sh@10 -- # set +x 00:06:35.074 ************************************ 00:06:35.074 START TEST alias_rpc 00:06:35.074 ************************************ 00:06:35.074 03:57:28 alias_rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:06:35.334 * Looking for test storage... 00:06:35.334 * Found test storage at /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc 00:06:35.334 03:57:28 alias_rpc -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:06:35.334 03:57:28 alias_rpc -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:06:35.334 03:57:28 alias_rpc -- common/autotest_common.sh@1711 -- # lcov --version 00:06:35.334 03:57:28 alias_rpc -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:06:35.334 03:57:28 alias_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:35.334 03:57:28 alias_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:35.334 03:57:28 alias_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:35.334 03:57:28 alias_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:06:35.334 03:57:28 alias_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:06:35.334 03:57:28 alias_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:06:35.334 03:57:28 alias_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:06:35.334 03:57:28 alias_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:06:35.334 03:57:28 alias_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:06:35.334 03:57:28 alias_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:06:35.334 03:57:28 alias_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:35.334 03:57:28 alias_rpc -- scripts/common.sh@344 -- # case "$op" in 00:06:35.334 03:57:28 alias_rpc -- 
scripts/common.sh@345 -- # : 1 00:06:35.334 03:57:28 alias_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:35.334 03:57:28 alias_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:35.334 03:57:28 alias_rpc -- scripts/common.sh@365 -- # decimal 1 00:06:35.334 03:57:28 alias_rpc -- scripts/common.sh@353 -- # local d=1 00:06:35.334 03:57:28 alias_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:35.334 03:57:28 alias_rpc -- scripts/common.sh@355 -- # echo 1 00:06:35.334 03:57:28 alias_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:06:35.334 03:57:28 alias_rpc -- scripts/common.sh@366 -- # decimal 2 00:06:35.334 03:57:28 alias_rpc -- scripts/common.sh@353 -- # local d=2 00:06:35.334 03:57:28 alias_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:35.334 03:57:28 alias_rpc -- scripts/common.sh@355 -- # echo 2 00:06:35.334 03:57:28 alias_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:06:35.334 03:57:28 alias_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:35.334 03:57:28 alias_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:35.334 03:57:28 alias_rpc -- scripts/common.sh@368 -- # return 0 00:06:35.334 03:57:28 alias_rpc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:35.334 03:57:28 alias_rpc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:06:35.334 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:35.334 --rc genhtml_branch_coverage=1 00:06:35.334 --rc genhtml_function_coverage=1 00:06:35.334 --rc genhtml_legend=1 00:06:35.334 --rc geninfo_all_blocks=1 00:06:35.334 --rc geninfo_unexecuted_blocks=1 00:06:35.334 00:06:35.334 ' 00:06:35.334 03:57:28 alias_rpc -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:06:35.334 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:35.334 --rc genhtml_branch_coverage=1 00:06:35.334 --rc genhtml_function_coverage=1 00:06:35.334 --rc 
genhtml_legend=1 00:06:35.334 --rc geninfo_all_blocks=1 00:06:35.334 --rc geninfo_unexecuted_blocks=1 00:06:35.334 00:06:35.334 ' 00:06:35.334 03:57:28 alias_rpc -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:06:35.334 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:35.334 --rc genhtml_branch_coverage=1 00:06:35.334 --rc genhtml_function_coverage=1 00:06:35.334 --rc genhtml_legend=1 00:06:35.334 --rc geninfo_all_blocks=1 00:06:35.334 --rc geninfo_unexecuted_blocks=1 00:06:35.334 00:06:35.334 ' 00:06:35.334 03:57:28 alias_rpc -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:06:35.334 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:35.334 --rc genhtml_branch_coverage=1 00:06:35.334 --rc genhtml_function_coverage=1 00:06:35.334 --rc genhtml_legend=1 00:06:35.334 --rc geninfo_all_blocks=1 00:06:35.334 --rc geninfo_unexecuted_blocks=1 00:06:35.334 00:06:35.334 ' 00:06:35.334 03:57:28 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:06:35.334 03:57:28 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=57879 00:06:35.334 03:57:28 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:35.334 03:57:28 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 57879 00:06:35.334 03:57:28 alias_rpc -- common/autotest_common.sh@835 -- # '[' -z 57879 ']' 00:06:35.334 03:57:28 alias_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:35.334 03:57:28 alias_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:35.334 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:35.334 03:57:28 alias_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:06:35.334 03:57:28 alias_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:35.334 03:57:28 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:35.334 [2024-12-06 03:57:28.682181] Starting SPDK v25.01-pre git sha1 a4d2a837b / DPDK 24.03.0 initialization... 00:06:35.334 [2024-12-06 03:57:28.682325] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57879 ] 00:06:35.593 [2024-12-06 03:57:28.843630] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:35.852 [2024-12-06 03:57:28.958034] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:36.790 03:57:29 alias_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:36.790 03:57:29 alias_rpc -- common/autotest_common.sh@868 -- # return 0 00:06:36.790 03:57:29 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config -i 00:06:36.790 03:57:30 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 57879 00:06:36.790 03:57:30 alias_rpc -- common/autotest_common.sh@954 -- # '[' -z 57879 ']' 00:06:36.790 03:57:30 alias_rpc -- common/autotest_common.sh@958 -- # kill -0 57879 00:06:36.790 03:57:30 alias_rpc -- common/autotest_common.sh@959 -- # uname 00:06:36.790 03:57:30 alias_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:36.790 03:57:30 alias_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57879 00:06:36.790 03:57:30 alias_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:36.790 killing process with pid 57879 00:06:36.790 03:57:30 alias_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:36.790 03:57:30 alias_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57879' 00:06:36.790 03:57:30 alias_rpc -- 
common/autotest_common.sh@973 -- # kill 57879 00:06:36.790 03:57:30 alias_rpc -- common/autotest_common.sh@978 -- # wait 57879 00:06:39.324 00:06:39.324 real 0m4.237s 00:06:39.324 user 0m4.260s 00:06:39.324 sys 0m0.562s 00:06:39.324 03:57:32 alias_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:39.324 03:57:32 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:39.324 ************************************ 00:06:39.324 END TEST alias_rpc 00:06:39.324 ************************************ 00:06:39.324 03:57:32 -- spdk/autotest.sh@163 -- # [[ 0 -eq 0 ]] 00:06:39.324 03:57:32 -- spdk/autotest.sh@164 -- # run_test spdkcli_tcp /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:06:39.324 03:57:32 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:39.324 03:57:32 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:39.324 03:57:32 -- common/autotest_common.sh@10 -- # set +x 00:06:39.324 ************************************ 00:06:39.324 START TEST spdkcli_tcp 00:06:39.324 ************************************ 00:06:39.324 03:57:32 spdkcli_tcp -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:06:39.581 * Looking for test storage... 
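The `START TEST` / `END TEST` banners and the `real`/`user`/`sys` lines bracketing `alias_rpc` above are produced by the harness's `run_test` wrapper, which times a named sub-test and fences its output. A condensed, illustrative reconstruction (not SPDK's actual `run_test` code, which also handles xtrace and argument validation):

```shell
#!/usr/bin/env bash
# Illustrative sketch of the run_test banner/timing pattern seen in the log:
# print a START banner, time the command, print an END banner, and
# propagate the command's exit status.
run_test() {
    local name=$1 rc=0
    shift
    echo "************************************"
    echo "START TEST $name"
    echo "************************************"
    time "$@" || rc=$?   # `time` writes real/user/sys to stderr
    echo "************************************"
    echo "END TEST $name"
    echo "************************************"
    return "$rc"
}
```

The `|| rc=$?` capture lets the wrapper emit the END banner even when the sub-test fails, while still reporting the failure to the caller.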
00:06:39.581 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:06:39.581 03:57:32 spdkcli_tcp -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:06:39.581 03:57:32 spdkcli_tcp -- common/autotest_common.sh@1711 -- # lcov --version 00:06:39.581 03:57:32 spdkcli_tcp -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:06:39.581 03:57:32 spdkcli_tcp -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:06:39.581 03:57:32 spdkcli_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:39.581 03:57:32 spdkcli_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:39.581 03:57:32 spdkcli_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:39.581 03:57:32 spdkcli_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:06:39.581 03:57:32 spdkcli_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:06:39.581 03:57:32 spdkcli_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:06:39.581 03:57:32 spdkcli_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:06:39.581 03:57:32 spdkcli_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:06:39.581 03:57:32 spdkcli_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:06:39.581 03:57:32 spdkcli_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:06:39.581 03:57:32 spdkcli_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:39.581 03:57:32 spdkcli_tcp -- scripts/common.sh@344 -- # case "$op" in 00:06:39.581 03:57:32 spdkcli_tcp -- scripts/common.sh@345 -- # : 1 00:06:39.581 03:57:32 spdkcli_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:39.581 03:57:32 spdkcli_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:39.581 03:57:32 spdkcli_tcp -- scripts/common.sh@365 -- # decimal 1 00:06:39.581 03:57:32 spdkcli_tcp -- scripts/common.sh@353 -- # local d=1 00:06:39.581 03:57:32 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:39.581 03:57:32 spdkcli_tcp -- scripts/common.sh@355 -- # echo 1 00:06:39.581 03:57:32 spdkcli_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:06:39.581 03:57:32 spdkcli_tcp -- scripts/common.sh@366 -- # decimal 2 00:06:39.581 03:57:32 spdkcli_tcp -- scripts/common.sh@353 -- # local d=2 00:06:39.581 03:57:32 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:39.581 03:57:32 spdkcli_tcp -- scripts/common.sh@355 -- # echo 2 00:06:39.581 03:57:32 spdkcli_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:06:39.581 03:57:32 spdkcli_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:39.581 03:57:32 spdkcli_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:39.581 03:57:32 spdkcli_tcp -- scripts/common.sh@368 -- # return 0 00:06:39.581 03:57:32 spdkcli_tcp -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:39.581 03:57:32 spdkcli_tcp -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:06:39.581 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:39.581 --rc genhtml_branch_coverage=1 00:06:39.581 --rc genhtml_function_coverage=1 00:06:39.581 --rc genhtml_legend=1 00:06:39.581 --rc geninfo_all_blocks=1 00:06:39.581 --rc geninfo_unexecuted_blocks=1 00:06:39.581 00:06:39.581 ' 00:06:39.581 03:57:32 spdkcli_tcp -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:06:39.581 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:39.581 --rc genhtml_branch_coverage=1 00:06:39.581 --rc genhtml_function_coverage=1 00:06:39.581 --rc genhtml_legend=1 00:06:39.581 --rc geninfo_all_blocks=1 00:06:39.581 --rc geninfo_unexecuted_blocks=1 00:06:39.581 00:06:39.581 ' 00:06:39.581 03:57:32 spdkcli_tcp -- 
common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:06:39.581 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:39.581 --rc genhtml_branch_coverage=1 00:06:39.581 --rc genhtml_function_coverage=1 00:06:39.581 --rc genhtml_legend=1 00:06:39.581 --rc geninfo_all_blocks=1 00:06:39.581 --rc geninfo_unexecuted_blocks=1 00:06:39.581 00:06:39.581 ' 00:06:39.581 03:57:32 spdkcli_tcp -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:06:39.581 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:39.581 --rc genhtml_branch_coverage=1 00:06:39.581 --rc genhtml_function_coverage=1 00:06:39.581 --rc genhtml_legend=1 00:06:39.581 --rc geninfo_all_blocks=1 00:06:39.581 --rc geninfo_unexecuted_blocks=1 00:06:39.581 00:06:39.581 ' 00:06:39.581 03:57:32 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:06:39.581 03:57:32 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:06:39.581 03:57:32 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:06:39.581 03:57:32 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:06:39.581 03:57:32 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:06:39.581 03:57:32 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:06:39.581 03:57:32 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:06:39.581 03:57:32 spdkcli_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:39.581 03:57:32 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:39.581 03:57:32 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:06:39.581 03:57:32 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=57986 00:06:39.581 03:57:32 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 57986 00:06:39.581 03:57:32 spdkcli_tcp -- 
common/autotest_common.sh@835 -- # '[' -z 57986 ']' 00:06:39.581 03:57:32 spdkcli_tcp -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:39.582 03:57:32 spdkcli_tcp -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:39.582 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:39.582 03:57:32 spdkcli_tcp -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:39.582 03:57:32 spdkcli_tcp -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:39.582 03:57:32 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:39.840 [2024-12-06 03:57:32.941055] Starting SPDK v25.01-pre git sha1 a4d2a837b / DPDK 24.03.0 initialization... 00:06:39.840 [2024-12-06 03:57:32.941199] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57986 ] 00:06:39.840 [2024-12-06 03:57:33.116663] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:40.097 [2024-12-06 03:57:33.238507] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:40.097 [2024-12-06 03:57:33.238552] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:41.031 03:57:34 spdkcli_tcp -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:41.031 03:57:34 spdkcli_tcp -- common/autotest_common.sh@868 -- # return 0 00:06:41.031 03:57:34 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:06:41.031 03:57:34 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=58003 00:06:41.031 03:57:34 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:06:41.031 [ 00:06:41.031 "bdev_malloc_delete", 
00:06:41.031 "bdev_malloc_create", 00:06:41.031 "bdev_null_resize", 00:06:41.031 "bdev_null_delete", 00:06:41.031 "bdev_null_create", 00:06:41.031 "bdev_nvme_cuse_unregister", 00:06:41.031 "bdev_nvme_cuse_register", 00:06:41.031 "bdev_opal_new_user", 00:06:41.031 "bdev_opal_set_lock_state", 00:06:41.031 "bdev_opal_delete", 00:06:41.031 "bdev_opal_get_info", 00:06:41.031 "bdev_opal_create", 00:06:41.031 "bdev_nvme_opal_revert", 00:06:41.031 "bdev_nvme_opal_init", 00:06:41.031 "bdev_nvme_send_cmd", 00:06:41.031 "bdev_nvme_set_keys", 00:06:41.031 "bdev_nvme_get_path_iostat", 00:06:41.031 "bdev_nvme_get_mdns_discovery_info", 00:06:41.031 "bdev_nvme_stop_mdns_discovery", 00:06:41.031 "bdev_nvme_start_mdns_discovery", 00:06:41.031 "bdev_nvme_set_multipath_policy", 00:06:41.031 "bdev_nvme_set_preferred_path", 00:06:41.031 "bdev_nvme_get_io_paths", 00:06:41.031 "bdev_nvme_remove_error_injection", 00:06:41.031 "bdev_nvme_add_error_injection", 00:06:41.031 "bdev_nvme_get_discovery_info", 00:06:41.031 "bdev_nvme_stop_discovery", 00:06:41.031 "bdev_nvme_start_discovery", 00:06:41.031 "bdev_nvme_get_controller_health_info", 00:06:41.031 "bdev_nvme_disable_controller", 00:06:41.031 "bdev_nvme_enable_controller", 00:06:41.031 "bdev_nvme_reset_controller", 00:06:41.031 "bdev_nvme_get_transport_statistics", 00:06:41.031 "bdev_nvme_apply_firmware", 00:06:41.031 "bdev_nvme_detach_controller", 00:06:41.031 "bdev_nvme_get_controllers", 00:06:41.031 "bdev_nvme_attach_controller", 00:06:41.031 "bdev_nvme_set_hotplug", 00:06:41.031 "bdev_nvme_set_options", 00:06:41.031 "bdev_passthru_delete", 00:06:41.031 "bdev_passthru_create", 00:06:41.031 "bdev_lvol_set_parent_bdev", 00:06:41.031 "bdev_lvol_set_parent", 00:06:41.031 "bdev_lvol_check_shallow_copy", 00:06:41.031 "bdev_lvol_start_shallow_copy", 00:06:41.031 "bdev_lvol_grow_lvstore", 00:06:41.031 "bdev_lvol_get_lvols", 00:06:41.031 "bdev_lvol_get_lvstores", 00:06:41.031 "bdev_lvol_delete", 00:06:41.031 "bdev_lvol_set_read_only", 
00:06:41.031 "bdev_lvol_resize", 00:06:41.031 "bdev_lvol_decouple_parent", 00:06:41.031 "bdev_lvol_inflate", 00:06:41.031 "bdev_lvol_rename", 00:06:41.031 "bdev_lvol_clone_bdev", 00:06:41.031 "bdev_lvol_clone", 00:06:41.031 "bdev_lvol_snapshot", 00:06:41.031 "bdev_lvol_create", 00:06:41.031 "bdev_lvol_delete_lvstore", 00:06:41.031 "bdev_lvol_rename_lvstore", 00:06:41.031 "bdev_lvol_create_lvstore", 00:06:41.031 "bdev_raid_set_options", 00:06:41.031 "bdev_raid_remove_base_bdev", 00:06:41.031 "bdev_raid_add_base_bdev", 00:06:41.031 "bdev_raid_delete", 00:06:41.031 "bdev_raid_create", 00:06:41.031 "bdev_raid_get_bdevs", 00:06:41.031 "bdev_error_inject_error", 00:06:41.031 "bdev_error_delete", 00:06:41.031 "bdev_error_create", 00:06:41.031 "bdev_split_delete", 00:06:41.031 "bdev_split_create", 00:06:41.031 "bdev_delay_delete", 00:06:41.031 "bdev_delay_create", 00:06:41.031 "bdev_delay_update_latency", 00:06:41.031 "bdev_zone_block_delete", 00:06:41.031 "bdev_zone_block_create", 00:06:41.031 "blobfs_create", 00:06:41.031 "blobfs_detect", 00:06:41.031 "blobfs_set_cache_size", 00:06:41.031 "bdev_aio_delete", 00:06:41.031 "bdev_aio_rescan", 00:06:41.031 "bdev_aio_create", 00:06:41.031 "bdev_ftl_set_property", 00:06:41.031 "bdev_ftl_get_properties", 00:06:41.031 "bdev_ftl_get_stats", 00:06:41.031 "bdev_ftl_unmap", 00:06:41.031 "bdev_ftl_unload", 00:06:41.031 "bdev_ftl_delete", 00:06:41.031 "bdev_ftl_load", 00:06:41.031 "bdev_ftl_create", 00:06:41.031 "bdev_virtio_attach_controller", 00:06:41.031 "bdev_virtio_scsi_get_devices", 00:06:41.031 "bdev_virtio_detach_controller", 00:06:41.031 "bdev_virtio_blk_set_hotplug", 00:06:41.031 "bdev_iscsi_delete", 00:06:41.031 "bdev_iscsi_create", 00:06:41.031 "bdev_iscsi_set_options", 00:06:41.031 "accel_error_inject_error", 00:06:41.031 "ioat_scan_accel_module", 00:06:41.031 "dsa_scan_accel_module", 00:06:41.031 "iaa_scan_accel_module", 00:06:41.031 "keyring_file_remove_key", 00:06:41.031 "keyring_file_add_key", 00:06:41.031 
"keyring_linux_set_options", 00:06:41.031 "fsdev_aio_delete", 00:06:41.031 "fsdev_aio_create", 00:06:41.031 "iscsi_get_histogram", 00:06:41.031 "iscsi_enable_histogram", 00:06:41.031 "iscsi_set_options", 00:06:41.031 "iscsi_get_auth_groups", 00:06:41.031 "iscsi_auth_group_remove_secret", 00:06:41.031 "iscsi_auth_group_add_secret", 00:06:41.031 "iscsi_delete_auth_group", 00:06:41.031 "iscsi_create_auth_group", 00:06:41.031 "iscsi_set_discovery_auth", 00:06:41.031 "iscsi_get_options", 00:06:41.031 "iscsi_target_node_request_logout", 00:06:41.031 "iscsi_target_node_set_redirect", 00:06:41.031 "iscsi_target_node_set_auth", 00:06:41.031 "iscsi_target_node_add_lun", 00:06:41.031 "iscsi_get_stats", 00:06:41.031 "iscsi_get_connections", 00:06:41.031 "iscsi_portal_group_set_auth", 00:06:41.031 "iscsi_start_portal_group", 00:06:41.031 "iscsi_delete_portal_group", 00:06:41.031 "iscsi_create_portal_group", 00:06:41.031 "iscsi_get_portal_groups", 00:06:41.031 "iscsi_delete_target_node", 00:06:41.032 "iscsi_target_node_remove_pg_ig_maps", 00:06:41.032 "iscsi_target_node_add_pg_ig_maps", 00:06:41.032 "iscsi_create_target_node", 00:06:41.032 "iscsi_get_target_nodes", 00:06:41.032 "iscsi_delete_initiator_group", 00:06:41.032 "iscsi_initiator_group_remove_initiators", 00:06:41.032 "iscsi_initiator_group_add_initiators", 00:06:41.032 "iscsi_create_initiator_group", 00:06:41.032 "iscsi_get_initiator_groups", 00:06:41.032 "nvmf_set_crdt", 00:06:41.032 "nvmf_set_config", 00:06:41.032 "nvmf_set_max_subsystems", 00:06:41.032 "nvmf_stop_mdns_prr", 00:06:41.032 "nvmf_publish_mdns_prr", 00:06:41.032 "nvmf_subsystem_get_listeners", 00:06:41.032 "nvmf_subsystem_get_qpairs", 00:06:41.032 "nvmf_subsystem_get_controllers", 00:06:41.032 "nvmf_get_stats", 00:06:41.032 "nvmf_get_transports", 00:06:41.032 "nvmf_create_transport", 00:06:41.032 "nvmf_get_targets", 00:06:41.032 "nvmf_delete_target", 00:06:41.032 "nvmf_create_target", 00:06:41.032 "nvmf_subsystem_allow_any_host", 00:06:41.032 
"nvmf_subsystem_set_keys", 00:06:41.032 "nvmf_subsystem_remove_host", 00:06:41.032 "nvmf_subsystem_add_host", 00:06:41.032 "nvmf_ns_remove_host", 00:06:41.032 "nvmf_ns_add_host", 00:06:41.032 "nvmf_subsystem_remove_ns", 00:06:41.032 "nvmf_subsystem_set_ns_ana_group", 00:06:41.032 "nvmf_subsystem_add_ns", 00:06:41.032 "nvmf_subsystem_listener_set_ana_state", 00:06:41.032 "nvmf_discovery_get_referrals", 00:06:41.032 "nvmf_discovery_remove_referral", 00:06:41.032 "nvmf_discovery_add_referral", 00:06:41.032 "nvmf_subsystem_remove_listener", 00:06:41.032 "nvmf_subsystem_add_listener", 00:06:41.032 "nvmf_delete_subsystem", 00:06:41.032 "nvmf_create_subsystem", 00:06:41.032 "nvmf_get_subsystems", 00:06:41.032 "env_dpdk_get_mem_stats", 00:06:41.032 "nbd_get_disks", 00:06:41.032 "nbd_stop_disk", 00:06:41.032 "nbd_start_disk", 00:06:41.032 "ublk_recover_disk", 00:06:41.032 "ublk_get_disks", 00:06:41.032 "ublk_stop_disk", 00:06:41.032 "ublk_start_disk", 00:06:41.032 "ublk_destroy_target", 00:06:41.032 "ublk_create_target", 00:06:41.032 "virtio_blk_create_transport", 00:06:41.032 "virtio_blk_get_transports", 00:06:41.032 "vhost_controller_set_coalescing", 00:06:41.032 "vhost_get_controllers", 00:06:41.032 "vhost_delete_controller", 00:06:41.032 "vhost_create_blk_controller", 00:06:41.032 "vhost_scsi_controller_remove_target", 00:06:41.032 "vhost_scsi_controller_add_target", 00:06:41.032 "vhost_start_scsi_controller", 00:06:41.032 "vhost_create_scsi_controller", 00:06:41.032 "thread_set_cpumask", 00:06:41.032 "scheduler_set_options", 00:06:41.032 "framework_get_governor", 00:06:41.032 "framework_get_scheduler", 00:06:41.032 "framework_set_scheduler", 00:06:41.032 "framework_get_reactors", 00:06:41.032 "thread_get_io_channels", 00:06:41.032 "thread_get_pollers", 00:06:41.032 "thread_get_stats", 00:06:41.032 "framework_monitor_context_switch", 00:06:41.032 "spdk_kill_instance", 00:06:41.032 "log_enable_timestamps", 00:06:41.032 "log_get_flags", 00:06:41.032 "log_clear_flag", 
00:06:41.032 "log_set_flag", 00:06:41.032 "log_get_level", 00:06:41.032 "log_set_level", 00:06:41.032 "log_get_print_level", 00:06:41.032 "log_set_print_level", 00:06:41.032 "framework_enable_cpumask_locks", 00:06:41.032 "framework_disable_cpumask_locks", 00:06:41.032 "framework_wait_init", 00:06:41.032 "framework_start_init", 00:06:41.032 "scsi_get_devices", 00:06:41.032 "bdev_get_histogram", 00:06:41.032 "bdev_enable_histogram", 00:06:41.032 "bdev_set_qos_limit", 00:06:41.032 "bdev_set_qd_sampling_period", 00:06:41.032 "bdev_get_bdevs", 00:06:41.032 "bdev_reset_iostat", 00:06:41.032 "bdev_get_iostat", 00:06:41.032 "bdev_examine", 00:06:41.032 "bdev_wait_for_examine", 00:06:41.032 "bdev_set_options", 00:06:41.032 "accel_get_stats", 00:06:41.032 "accel_set_options", 00:06:41.032 "accel_set_driver", 00:06:41.032 "accel_crypto_key_destroy", 00:06:41.032 "accel_crypto_keys_get", 00:06:41.032 "accel_crypto_key_create", 00:06:41.032 "accel_assign_opc", 00:06:41.032 "accel_get_module_info", 00:06:41.032 "accel_get_opc_assignments", 00:06:41.032 "vmd_rescan", 00:06:41.032 "vmd_remove_device", 00:06:41.032 "vmd_enable", 00:06:41.032 "sock_get_default_impl", 00:06:41.032 "sock_set_default_impl", 00:06:41.032 "sock_impl_set_options", 00:06:41.032 "sock_impl_get_options", 00:06:41.032 "iobuf_get_stats", 00:06:41.032 "iobuf_set_options", 00:06:41.032 "keyring_get_keys", 00:06:41.032 "framework_get_pci_devices", 00:06:41.032 "framework_get_config", 00:06:41.032 "framework_get_subsystems", 00:06:41.032 "fsdev_set_opts", 00:06:41.032 "fsdev_get_opts", 00:06:41.032 "trace_get_info", 00:06:41.032 "trace_get_tpoint_group_mask", 00:06:41.032 "trace_disable_tpoint_group", 00:06:41.032 "trace_enable_tpoint_group", 00:06:41.032 "trace_clear_tpoint_mask", 00:06:41.032 "trace_set_tpoint_mask", 00:06:41.032 "notify_get_notifications", 00:06:41.032 "notify_get_types", 00:06:41.032 "spdk_get_version", 00:06:41.032 "rpc_get_methods" 00:06:41.032 ] 00:06:41.032 03:57:34 spdkcli_tcp -- 
spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:06:41.032 03:57:34 spdkcli_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:06:41.032 03:57:34 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:41.290 03:57:34 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:06:41.290 03:57:34 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 57986 00:06:41.290 03:57:34 spdkcli_tcp -- common/autotest_common.sh@954 -- # '[' -z 57986 ']' 00:06:41.290 03:57:34 spdkcli_tcp -- common/autotest_common.sh@958 -- # kill -0 57986 00:06:41.290 03:57:34 spdkcli_tcp -- common/autotest_common.sh@959 -- # uname 00:06:41.290 03:57:34 spdkcli_tcp -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:41.290 03:57:34 spdkcli_tcp -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57986 00:06:41.290 03:57:34 spdkcli_tcp -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:41.290 03:57:34 spdkcli_tcp -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:41.290 killing process with pid 57986 00:06:41.290 03:57:34 spdkcli_tcp -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57986' 00:06:41.290 03:57:34 spdkcli_tcp -- common/autotest_common.sh@973 -- # kill 57986 00:06:41.290 03:57:34 spdkcli_tcp -- common/autotest_common.sh@978 -- # wait 57986 00:06:43.822 00:06:43.822 real 0m4.219s 00:06:43.822 user 0m7.587s 00:06:43.822 sys 0m0.591s 00:06:43.822 03:57:36 spdkcli_tcp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:43.822 03:57:36 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:43.822 ************************************ 00:06:43.822 END TEST spdkcli_tcp 00:06:43.823 ************************************ 00:06:43.823 03:57:36 -- spdk/autotest.sh@167 -- # run_test dpdk_mem_utility /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:06:43.823 03:57:36 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:43.823 03:57:36 -- 
common/autotest_common.sh@1111 -- # xtrace_disable 00:06:43.823 03:57:36 -- common/autotest_common.sh@10 -- # set +x 00:06:43.823 ************************************ 00:06:43.823 START TEST dpdk_mem_utility 00:06:43.823 ************************************ 00:06:43.823 03:57:36 dpdk_mem_utility -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:06:43.823 * Looking for test storage... 00:06:43.823 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility 00:06:43.823 03:57:37 dpdk_mem_utility -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:06:43.823 03:57:37 dpdk_mem_utility -- common/autotest_common.sh@1711 -- # lcov --version 00:06:43.823 03:57:37 dpdk_mem_utility -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:06:43.823 03:57:37 dpdk_mem_utility -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:06:43.823 03:57:37 dpdk_mem_utility -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:43.823 03:57:37 dpdk_mem_utility -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:43.823 03:57:37 dpdk_mem_utility -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:43.823 03:57:37 dpdk_mem_utility -- scripts/common.sh@336 -- # IFS=.-: 00:06:43.823 03:57:37 dpdk_mem_utility -- scripts/common.sh@336 -- # read -ra ver1 00:06:43.823 03:57:37 dpdk_mem_utility -- scripts/common.sh@337 -- # IFS=.-: 00:06:43.823 03:57:37 dpdk_mem_utility -- scripts/common.sh@337 -- # read -ra ver2 00:06:43.823 03:57:37 dpdk_mem_utility -- scripts/common.sh@338 -- # local 'op=<' 00:06:43.823 03:57:37 dpdk_mem_utility -- scripts/common.sh@340 -- # ver1_l=2 00:06:43.823 03:57:37 dpdk_mem_utility -- scripts/common.sh@341 -- # ver2_l=1 00:06:43.823 03:57:37 dpdk_mem_utility -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:43.823 03:57:37 dpdk_mem_utility -- scripts/common.sh@344 -- # case "$op" in 00:06:43.823 03:57:37 dpdk_mem_utility -- scripts/common.sh@345 -- # : 1 00:06:43.823 
03:57:37 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:43.823 03:57:37 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:43.823 03:57:37 dpdk_mem_utility -- scripts/common.sh@365 -- # decimal 1 00:06:43.823 03:57:37 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=1 00:06:43.823 03:57:37 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:43.823 03:57:37 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 1 00:06:43.823 03:57:37 dpdk_mem_utility -- scripts/common.sh@365 -- # ver1[v]=1 00:06:43.823 03:57:37 dpdk_mem_utility -- scripts/common.sh@366 -- # decimal 2 00:06:43.823 03:57:37 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=2 00:06:43.823 03:57:37 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:43.823 03:57:37 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 2 00:06:43.823 03:57:37 dpdk_mem_utility -- scripts/common.sh@366 -- # ver2[v]=2 00:06:43.823 03:57:37 dpdk_mem_utility -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:43.823 03:57:37 dpdk_mem_utility -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:43.823 03:57:37 dpdk_mem_utility -- scripts/common.sh@368 -- # return 0 00:06:43.823 03:57:37 dpdk_mem_utility -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:43.823 03:57:37 dpdk_mem_utility -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:06:43.823 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:43.823 --rc genhtml_branch_coverage=1 00:06:43.823 --rc genhtml_function_coverage=1 00:06:43.823 --rc genhtml_legend=1 00:06:43.823 --rc geninfo_all_blocks=1 00:06:43.823 --rc geninfo_unexecuted_blocks=1 00:06:43.823 00:06:43.823 ' 00:06:43.823 03:57:37 dpdk_mem_utility -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:06:43.823 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:43.823 --rc 
genhtml_branch_coverage=1 00:06:43.823 --rc genhtml_function_coverage=1 00:06:43.823 --rc genhtml_legend=1 00:06:43.823 --rc geninfo_all_blocks=1 00:06:43.823 --rc geninfo_unexecuted_blocks=1 00:06:43.823 00:06:43.823 ' 00:06:43.823 03:57:37 dpdk_mem_utility -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:06:43.823 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:43.823 --rc genhtml_branch_coverage=1 00:06:43.823 --rc genhtml_function_coverage=1 00:06:43.823 --rc genhtml_legend=1 00:06:43.823 --rc geninfo_all_blocks=1 00:06:43.823 --rc geninfo_unexecuted_blocks=1 00:06:43.823 00:06:43.823 ' 00:06:43.823 03:57:37 dpdk_mem_utility -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:06:43.823 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:43.823 --rc genhtml_branch_coverage=1 00:06:43.823 --rc genhtml_function_coverage=1 00:06:43.823 --rc genhtml_legend=1 00:06:43.823 --rc geninfo_all_blocks=1 00:06:43.823 --rc geninfo_unexecuted_blocks=1 00:06:43.823 00:06:43.823 ' 00:06:43.823 03:57:37 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:06:43.823 03:57:37 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=58108 00:06:43.823 03:57:37 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:43.823 03:57:37 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 58108 00:06:43.823 03:57:37 dpdk_mem_utility -- common/autotest_common.sh@835 -- # '[' -z 58108 ']' 00:06:43.823 03:57:37 dpdk_mem_utility -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:43.823 03:57:37 dpdk_mem_utility -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:43.823 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
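Each test prologue above (alias_rpc, spdkcli_tcp, and here dpdk_mem_utility) runs the same `scripts/common.sh` gate: it takes `lcov --version`, splits it on dots, and compares it field by field against 1.15 before enabling the branch-coverage flags. A condensed sketch of that dotted-version comparison (the helper name `ver_lt` is illustrative; the real helpers are `lt`/`cmp_versions`):

```shell
#!/usr/bin/env bash
# Illustrative sketch: return 0 (true) iff dotted version $1 < $2,
# comparing numeric fields left to right and padding the shorter
# version with zeros, as the common.sh gate above does.
ver_lt() {
    local IFS=.
    local -a a=($1) b=($2)
    local i n=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} ))
    for (( i = 0; i < n; i++ )); do
        if (( ${a[i]:-0} < ${b[i]:-0} )); then return 0; fi
        if (( ${a[i]:-0} > ${b[i]:-0} )); then return 1; fi
    done
    return 1  # equal versions are not "less than"
}
```

For example, `ver_lt 1.15 2` succeeds because the first field already decides the comparison, which is exactly the `lt 1.15 2` check the log repeats before each test.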
00:06:43.823 03:57:37 dpdk_mem_utility -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:43.823 03:57:37 dpdk_mem_utility -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:43.823 03:57:37 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:06:44.082 [2024-12-06 03:57:37.238225] Starting SPDK v25.01-pre git sha1 a4d2a837b / DPDK 24.03.0 initialization... 00:06:44.082 [2024-12-06 03:57:37.238346] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58108 ] 00:06:44.082 [2024-12-06 03:57:37.414691] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:44.341 [2024-12-06 03:57:37.537686] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:45.312 03:57:38 dpdk_mem_utility -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:45.312 03:57:38 dpdk_mem_utility -- common/autotest_common.sh@868 -- # return 0 00:06:45.312 03:57:38 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:06:45.312 03:57:38 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:06:45.312 03:57:38 dpdk_mem_utility -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:45.312 03:57:38 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:06:45.312 { 00:06:45.312 "filename": "/tmp/spdk_mem_dump.txt" 00:06:45.312 } 00:06:45.312 03:57:38 dpdk_mem_utility -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:45.312 03:57:38 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:06:45.312 DPDK memory size 824.000000 MiB in 1 heap(s) 00:06:45.312 1 heaps 
totaling size 824.000000 MiB 00:06:45.312 size: 824.000000 MiB heap id: 0 00:06:45.312 end heaps---------- 00:06:45.312 9 mempools totaling size 603.782043 MiB 00:06:45.312 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:06:45.312 size: 158.602051 MiB name: PDU_data_out_Pool 00:06:45.312 size: 100.555481 MiB name: bdev_io_58108 00:06:45.312 size: 50.003479 MiB name: msgpool_58108 00:06:45.312 size: 36.509338 MiB name: fsdev_io_58108 00:06:45.312 size: 21.763794 MiB name: PDU_Pool 00:06:45.312 size: 19.513306 MiB name: SCSI_TASK_Pool 00:06:45.312 size: 4.133484 MiB name: evtpool_58108 00:06:45.312 size: 0.026123 MiB name: Session_Pool 00:06:45.312 end mempools------- 00:06:45.312 6 memzones totaling size 4.142822 MiB 00:06:45.312 size: 1.000366 MiB name: RG_ring_0_58108 00:06:45.312 size: 1.000366 MiB name: RG_ring_1_58108 00:06:45.312 size: 1.000366 MiB name: RG_ring_4_58108 00:06:45.312 size: 1.000366 MiB name: RG_ring_5_58108 00:06:45.312 size: 0.125366 MiB name: RG_ring_2_58108 00:06:45.312 size: 0.015991 MiB name: RG_ring_3_58108 00:06:45.312 end memzones------- 00:06:45.312 03:57:38 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0 00:06:45.312 heap id: 0 total size: 824.000000 MiB number of busy elements: 323 number of free elements: 18 00:06:45.312 list of free elements. 
size: 16.779419 MiB 00:06:45.312 element at address: 0x200006400000 with size: 1.995972 MiB 00:06:45.312 element at address: 0x20000a600000 with size: 1.995972 MiB 00:06:45.312 element at address: 0x200003e00000 with size: 1.991028 MiB 00:06:45.312 element at address: 0x200019500040 with size: 0.999939 MiB 00:06:45.312 element at address: 0x200019900040 with size: 0.999939 MiB 00:06:45.312 element at address: 0x200019a00000 with size: 0.999084 MiB 00:06:45.312 element at address: 0x200032600000 with size: 0.994324 MiB 00:06:45.312 element at address: 0x200000400000 with size: 0.992004 MiB 00:06:45.313 element at address: 0x200019200000 with size: 0.959656 MiB 00:06:45.313 element at address: 0x200019d00040 with size: 0.936401 MiB 00:06:45.313 element at address: 0x200000200000 with size: 0.716980 MiB 00:06:45.313 element at address: 0x20001b400000 with size: 0.560974 MiB 00:06:45.313 element at address: 0x200000c00000 with size: 0.489197 MiB 00:06:45.313 element at address: 0x200019600000 with size: 0.487976 MiB 00:06:45.313 element at address: 0x200019e00000 with size: 0.485413 MiB 00:06:45.313 element at address: 0x200012c00000 with size: 0.433228 MiB 00:06:45.313 element at address: 0x200028800000 with size: 0.390442 MiB 00:06:45.313 element at address: 0x200000800000 with size: 0.350891 MiB 00:06:45.313 list of standard malloc elements. 
size: 199.289673 MiB 00:06:45.313 element at address: 0x20000a7fef80 with size: 132.000183 MiB 00:06:45.313 element at address: 0x2000065fef80 with size: 64.000183 MiB 00:06:45.313 element at address: 0x2000193fff80 with size: 1.000183 MiB 00:06:45.313 element at address: 0x2000197fff80 with size: 1.000183 MiB 00:06:45.313 element at address: 0x200019bfff80 with size: 1.000183 MiB 00:06:45.313 element at address: 0x2000003d9e80 with size: 0.140808 MiB 00:06:45.313 element at address: 0x200019deff40 with size: 0.062683 MiB 00:06:45.313 element at address: 0x2000003fdf40 with size: 0.007996 MiB 00:06:45.313 element at address: 0x20000a5ff040 with size: 0.000427 MiB 00:06:45.313 element at address: 0x200019defdc0 with size: 0.000366 MiB 00:06:45.313 element at address: 0x200012bff040 with size: 0.000305 MiB 00:06:45.313 element at address: 0x2000002d7b00 with size: 0.000244 MiB 00:06:45.313 element at address: 0x2000003d9d80 with size: 0.000244 MiB 00:06:45.313 element at address: 0x2000004fdf40 with size: 0.000244 MiB 00:06:45.313 element at address: 0x2000004fe040 with size: 0.000244 MiB 00:06:45.313 element at address: 0x2000004fe140 with size: 0.000244 MiB 00:06:45.313 element at address: 0x2000004fe240 with size: 0.000244 MiB 00:06:45.313 element at address: 0x2000004fe340 with size: 0.000244 MiB 00:06:45.313 element at address: 0x2000004fe440 with size: 0.000244 MiB 00:06:45.313 element at address: 0x2000004fe540 with size: 0.000244 MiB 00:06:45.313 element at address: 0x2000004fe640 with size: 0.000244 MiB 00:06:45.313 element at address: 0x2000004fe740 with size: 0.000244 MiB 00:06:45.313 element at address: 0x2000004fe840 with size: 0.000244 MiB 00:06:45.313 element at address: 0x2000004fe940 with size: 0.000244 MiB 00:06:45.313 element at address: 0x2000004fea40 with size: 0.000244 MiB 00:06:45.313 element at address: 0x2000004feb40 with size: 0.000244 MiB 00:06:45.313 element at address: 0x2000004fec40 with size: 0.000244 MiB 00:06:45.313 element at 
address: 0x2000004fed40 with size: 0.000244 MiB 00:06:45.313 element at address: 0x2000004fee40 with size: 0.000244 MiB 00:06:45.313 element at address: 0x2000004fef40 with size: 0.000244 MiB 00:06:45.313 element at address: 0x2000004ff040 with size: 0.000244 MiB 00:06:45.313 element at address: 0x2000004ff140 with size: 0.000244 MiB 00:06:45.313 element at address: 0x2000004ff240 with size: 0.000244 MiB 00:06:45.313 element at address: 0x2000004ff340 with size: 0.000244 MiB 00:06:45.313 element at address: 0x2000004ff440 with size: 0.000244 MiB 00:06:45.313 element at address: 0x2000004ff540 with size: 0.000244 MiB 00:06:45.313 element at address: 0x2000004ff640 with size: 0.000244 MiB 00:06:45.313 element at address: 0x2000004ff740 with size: 0.000244 MiB 00:06:45.313 element at address: 0x2000004ff840 with size: 0.000244 MiB 00:06:45.313 element at address: 0x2000004ff940 with size: 0.000244 MiB 00:06:45.313 element at address: 0x2000004ffbc0 with size: 0.000244 MiB 00:06:45.313 element at address: 0x2000004ffcc0 with size: 0.000244 MiB 00:06:45.313 element at address: 0x2000004ffdc0 with size: 0.000244 MiB 00:06:45.313 element at address: 0x20000087e1c0 with size: 0.000244 MiB 00:06:45.313 element at address: 0x20000087e2c0 with size: 0.000244 MiB 00:06:45.313 element at address: 0x20000087e3c0 with size: 0.000244 MiB 00:06:45.313 element at address: 0x20000087e4c0 with size: 0.000244 MiB 00:06:45.313 element at address: 0x20000087e5c0 with size: 0.000244 MiB 00:06:45.313 element at address: 0x20000087e6c0 with size: 0.000244 MiB 00:06:45.313 element at address: 0x20000087e7c0 with size: 0.000244 MiB 00:06:45.313 element at address: 0x20000087e8c0 with size: 0.000244 MiB 00:06:45.313 element at address: 0x20000087e9c0 with size: 0.000244 MiB 00:06:45.313 element at address: 0x20000087eac0 with size: 0.000244 MiB 00:06:45.313 element at address: 0x20000087ebc0 with size: 0.000244 MiB 00:06:45.313 element at address: 0x20000087ecc0 with size: 0.000244 MiB 
00:06:45.313 element at address: 0x20000087edc0 with size: 0.000244 MiB 00:06:45.313 element at address: 0x20000087eec0 with size: 0.000244 MiB 00:06:45.313 element at address: 0x20000087efc0 with size: 0.000244 MiB 00:06:45.313 element at address: 0x20000087f0c0 with size: 0.000244 MiB 00:06:45.313 element at address: 0x20000087f1c0 with size: 0.000244 MiB 00:06:45.313 element at address: 0x20000087f2c0 with size: 0.000244 MiB 00:06:45.313 element at address: 0x20000087f3c0 with size: 0.000244 MiB 00:06:45.313 element at address: 0x20000087f4c0 with size: 0.000244 MiB 00:06:45.313 element at address: 0x2000008ff800 with size: 0.000244 MiB 00:06:45.313 element at address: 0x2000008ffa80 with size: 0.000244 MiB 00:06:45.313 element at address: 0x200000c7d3c0 with size: 0.000244 MiB 00:06:45.313 element at address: 0x200000c7d4c0 with size: 0.000244 MiB 00:06:45.313 element at address: 0x200000c7d5c0 with size: 0.000244 MiB 00:06:45.313 element at address: 0x200000c7d6c0 with size: 0.000244 MiB 00:06:45.313 element at address: 0x200000c7d7c0 with size: 0.000244 MiB 00:06:45.313 element at address: 0x200000c7d8c0 with size: 0.000244 MiB 00:06:45.313 element at address: 0x200000c7d9c0 with size: 0.000244 MiB 00:06:45.313 element at address: 0x200000c7dac0 with size: 0.000244 MiB 00:06:45.313 element at address: 0x200000c7dbc0 with size: 0.000244 MiB 00:06:45.313 element at address: 0x200000c7dcc0 with size: 0.000244 MiB 00:06:45.313 element at address: 0x200000c7ddc0 with size: 0.000244 MiB 00:06:45.313 element at address: 0x200000c7dec0 with size: 0.000244 MiB 00:06:45.313 element at address: 0x200000c7dfc0 with size: 0.000244 MiB 00:06:45.313 element at address: 0x200000c7e0c0 with size: 0.000244 MiB 00:06:45.313 element at address: 0x200000c7e1c0 with size: 0.000244 MiB 00:06:45.313 element at address: 0x200000c7e2c0 with size: 0.000244 MiB 00:06:45.313 element at address: 0x200000c7e3c0 with size: 0.000244 MiB 00:06:45.313 element at address: 0x200000c7e4c0 with 
size: 0.000244 MiB 00:06:45.313 element at address: 0x200000c7e5c0 with size: 0.000244 MiB 00:06:45.313 element at address: 0x200000c7e6c0 with size: 0.000244 MiB 00:06:45.313 element at address: 0x200000c7e7c0 with size: 0.000244 MiB 00:06:45.313 element at address: 0x200000c7e8c0 with size: 0.000244 MiB 00:06:45.313 element at address: 0x200000c7e9c0 with size: 0.000244 MiB 00:06:45.313 element at address: 0x200000c7eac0 with size: 0.000244 MiB 00:06:45.313 element at address: 0x200000c7ebc0 with size: 0.000244 MiB 00:06:45.313 element at address: 0x200000cfef00 with size: 0.000244 MiB 00:06:45.313 element at address: 0x200000cff000 with size: 0.000244 MiB 00:06:45.313 element at address: 0x20000a5ff200 with size: 0.000244 MiB 00:06:45.313 element at address: 0x20000a5ff300 with size: 0.000244 MiB 00:06:45.313 element at address: 0x20000a5ff400 with size: 0.000244 MiB 00:06:45.313 element at address: 0x20000a5ff500 with size: 0.000244 MiB 00:06:45.313 element at address: 0x20000a5ff600 with size: 0.000244 MiB 00:06:45.313 element at address: 0x20000a5ff700 with size: 0.000244 MiB 00:06:45.313 element at address: 0x20000a5ff800 with size: 0.000244 MiB 00:06:45.313 element at address: 0x20000a5ff900 with size: 0.000244 MiB 00:06:45.313 element at address: 0x20000a5ffa00 with size: 0.000244 MiB 00:06:45.313 element at address: 0x20000a5ffb00 with size: 0.000244 MiB 00:06:45.313 element at address: 0x20000a5ffc00 with size: 0.000244 MiB 00:06:45.313 element at address: 0x20000a5ffd00 with size: 0.000244 MiB 00:06:45.313 element at address: 0x20000a5ffe00 with size: 0.000244 MiB 00:06:45.313 element at address: 0x20000a5fff00 with size: 0.000244 MiB 00:06:45.313 element at address: 0x200012bff180 with size: 0.000244 MiB 00:06:45.313 element at address: 0x200012bff280 with size: 0.000244 MiB 00:06:45.313 element at address: 0x200012bff380 with size: 0.000244 MiB 00:06:45.313 element at address: 0x200012bff480 with size: 0.000244 MiB 00:06:45.313 element at address: 
0x200012bff580 with size: 0.000244 MiB 00:06:45.313 element at address: 0x200012bff680 with size: 0.000244 MiB 00:06:45.313 element at address: 0x200012bff780 with size: 0.000244 MiB 00:06:45.313 element at address: 0x200012bff880 with size: 0.000244 MiB 00:06:45.313 element at address: 0x200012bff980 with size: 0.000244 MiB 00:06:45.313 element at address: 0x200012bffa80 with size: 0.000244 MiB 00:06:45.313 element at address: 0x200012bffb80 with size: 0.000244 MiB 00:06:45.313 element at address: 0x200012bffc80 with size: 0.000244 MiB 00:06:45.313 element at address: 0x200012bfff00 with size: 0.000244 MiB 00:06:45.313 element at address: 0x200012c6ee80 with size: 0.000244 MiB 00:06:45.313 element at address: 0x200012c6ef80 with size: 0.000244 MiB 00:06:45.313 element at address: 0x200012c6f080 with size: 0.000244 MiB 00:06:45.313 element at address: 0x200012c6f180 with size: 0.000244 MiB 00:06:45.313 element at address: 0x200012c6f280 with size: 0.000244 MiB 00:06:45.313 element at address: 0x200012c6f380 with size: 0.000244 MiB 00:06:45.313 element at address: 0x200012c6f480 with size: 0.000244 MiB 00:06:45.313 element at address: 0x200012c6f580 with size: 0.000244 MiB 00:06:45.313 element at address: 0x200012c6f680 with size: 0.000244 MiB 00:06:45.313 element at address: 0x200012c6f780 with size: 0.000244 MiB 00:06:45.313 element at address: 0x200012c6f880 with size: 0.000244 MiB 00:06:45.313 element at address: 0x200012cefbc0 with size: 0.000244 MiB 00:06:45.313 element at address: 0x2000192fdd00 with size: 0.000244 MiB 00:06:45.313 element at address: 0x20001967cec0 with size: 0.000244 MiB 00:06:45.313 element at address: 0x20001967cfc0 with size: 0.000244 MiB 00:06:45.313 element at address: 0x20001967d0c0 with size: 0.000244 MiB 00:06:45.313 element at address: 0x20001967d1c0 with size: 0.000244 MiB 00:06:45.313 element at address: 0x20001967d2c0 with size: 0.000244 MiB 00:06:45.313 element at address: 0x20001967d3c0 with size: 0.000244 MiB 00:06:45.313 
element at address: 0x20001967d4c0 with size: 0.000244 MiB 00:06:45.313 element at address: 0x20001967d5c0 with size: 0.000244 MiB 00:06:45.314 element at address: 0x20001967d6c0 with size: 0.000244 MiB 00:06:45.314 element at address: 0x20001967d7c0 with size: 0.000244 MiB 00:06:45.314 element at address: 0x20001967d8c0 with size: 0.000244 MiB 00:06:45.314 element at address: 0x20001967d9c0 with size: 0.000244 MiB 00:06:45.314 element at address: 0x2000196fdd00 with size: 0.000244 MiB 00:06:45.314 element at address: 0x200019affc40 with size: 0.000244 MiB 00:06:45.314 element at address: 0x200019defbc0 with size: 0.000244 MiB 00:06:45.314 element at address: 0x200019defcc0 with size: 0.000244 MiB 00:06:45.314 element at address: 0x200019ebc680 with size: 0.000244 MiB 00:06:45.314 element at address: 0x20001b48f9c0 with size: 0.000244 MiB 00:06:45.314 element at address: 0x20001b48fac0 with size: 0.000244 MiB 00:06:45.314 element at address: 0x20001b48fbc0 with size: 0.000244 MiB 00:06:45.314 element at address: 0x20001b48fcc0 with size: 0.000244 MiB 00:06:45.314 element at address: 0x20001b48fdc0 with size: 0.000244 MiB 00:06:45.314 element at address: 0x20001b48fec0 with size: 0.000244 MiB 00:06:45.314 element at address: 0x20001b48ffc0 with size: 0.000244 MiB 00:06:45.314 element at address: 0x20001b4900c0 with size: 0.000244 MiB 00:06:45.314 element at address: 0x20001b4901c0 with size: 0.000244 MiB 00:06:45.314 element at address: 0x20001b4902c0 with size: 0.000244 MiB 00:06:45.314 element at address: 0x20001b4903c0 with size: 0.000244 MiB 00:06:45.314 element at address: 0x20001b4904c0 with size: 0.000244 MiB 00:06:45.314 element at address: 0x20001b4905c0 with size: 0.000244 MiB 00:06:45.314 element at address: 0x20001b4906c0 with size: 0.000244 MiB 00:06:45.314 element at address: 0x20001b4907c0 with size: 0.000244 MiB 00:06:45.314 element at address: 0x20001b4908c0 with size: 0.000244 MiB 00:06:45.314 element at address: 0x20001b4909c0 with size: 0.000244 
MiB 00:06:45.314 element at address: 0x20001b490ac0 with size: 0.000244 MiB 00:06:45.314 element at address: 0x20001b490bc0 with size: 0.000244 MiB 00:06:45.314 element at address: 0x20001b490cc0 with size: 0.000244 MiB 00:06:45.314 element at address: 0x20001b490dc0 with size: 0.000244 MiB 00:06:45.314 element at address: 0x20001b490ec0 with size: 0.000244 MiB 00:06:45.314 element at address: 0x20001b490fc0 with size: 0.000244 MiB 00:06:45.314 element at address: 0x20001b4910c0 with size: 0.000244 MiB 00:06:45.314 element at address: 0x20001b4911c0 with size: 0.000244 MiB 00:06:45.314 element at address: 0x20001b4912c0 with size: 0.000244 MiB 00:06:45.314 element at address: 0x20001b4913c0 with size: 0.000244 MiB 00:06:45.314 element at address: 0x20001b4914c0 with size: 0.000244 MiB 00:06:45.314 element at address: 0x20001b4915c0 with size: 0.000244 MiB 00:06:45.314 element at address: 0x20001b4916c0 with size: 0.000244 MiB 00:06:45.314 element at address: 0x20001b4917c0 with size: 0.000244 MiB 00:06:45.314 element at address: 0x20001b4918c0 with size: 0.000244 MiB 00:06:45.314 element at address: 0x20001b4919c0 with size: 0.000244 MiB 00:06:45.314 element at address: 0x20001b491ac0 with size: 0.000244 MiB 00:06:45.314 element at address: 0x20001b491bc0 with size: 0.000244 MiB 00:06:45.314 element at address: 0x20001b491cc0 with size: 0.000244 MiB 00:06:45.314 element at address: 0x20001b491dc0 with size: 0.000244 MiB 00:06:45.314 element at address: 0x20001b491ec0 with size: 0.000244 MiB 00:06:45.314 element at address: 0x20001b491fc0 with size: 0.000244 MiB 00:06:45.314 element at address: 0x20001b4920c0 with size: 0.000244 MiB 00:06:45.314 element at address: 0x20001b4921c0 with size: 0.000244 MiB 00:06:45.314 element at address: 0x20001b4922c0 with size: 0.000244 MiB 00:06:45.314 element at address: 0x20001b4923c0 with size: 0.000244 MiB 00:06:45.314 element at address: 0x20001b4924c0 with size: 0.000244 MiB 00:06:45.314 element at address: 0x20001b4925c0 
with size: 0.000244 MiB 00:06:45.314 element at address: 0x20001b4926c0 with size: 0.000244 MiB 00:06:45.314 element at address: 0x20001b4927c0 with size: 0.000244 MiB 00:06:45.314 element at address: 0x20001b4928c0 with size: 0.000244 MiB 00:06:45.314 element at address: 0x20001b4929c0 with size: 0.000244 MiB 00:06:45.314 element at address: 0x20001b492ac0 with size: 0.000244 MiB 00:06:45.314 element at address: 0x20001b492bc0 with size: 0.000244 MiB 00:06:45.314 element at address: 0x20001b492cc0 with size: 0.000244 MiB 00:06:45.314 element at address: 0x20001b492dc0 with size: 0.000244 MiB 00:06:45.314 element at address: 0x20001b492ec0 with size: 0.000244 MiB 00:06:45.314 element at address: 0x20001b492fc0 with size: 0.000244 MiB 00:06:45.314 element at address: 0x20001b4930c0 with size: 0.000244 MiB 00:06:45.314 element at address: 0x20001b4931c0 with size: 0.000244 MiB 00:06:45.314 element at address: 0x20001b4932c0 with size: 0.000244 MiB 00:06:45.314 element at address: 0x20001b4933c0 with size: 0.000244 MiB 00:06:45.314 element at address: 0x20001b4934c0 with size: 0.000244 MiB 00:06:45.314 element at address: 0x20001b4935c0 with size: 0.000244 MiB 00:06:45.314 element at address: 0x20001b4936c0 with size: 0.000244 MiB 00:06:45.314 element at address: 0x20001b4937c0 with size: 0.000244 MiB 00:06:45.314 element at address: 0x20001b4938c0 with size: 0.000244 MiB 00:06:45.314 element at address: 0x20001b4939c0 with size: 0.000244 MiB 00:06:45.314 element at address: 0x20001b493ac0 with size: 0.000244 MiB 00:06:45.314 element at address: 0x20001b493bc0 with size: 0.000244 MiB 00:06:45.314 element at address: 0x20001b493cc0 with size: 0.000244 MiB 00:06:45.314 element at address: 0x20001b493dc0 with size: 0.000244 MiB 00:06:45.314 element at address: 0x20001b493ec0 with size: 0.000244 MiB 00:06:45.314 element at address: 0x20001b493fc0 with size: 0.000244 MiB 00:06:45.314 element at address: 0x20001b4940c0 with size: 0.000244 MiB 00:06:45.314 element at 
address: 0x20001b4941c0 with size: 0.000244 MiB 00:06:45.314 element at address: 0x20001b4942c0 with size: 0.000244 MiB 00:06:45.314 element at address: 0x20001b4943c0 with size: 0.000244 MiB 00:06:45.314 element at address: 0x20001b4944c0 with size: 0.000244 MiB 00:06:45.314 element at address: 0x20001b4945c0 with size: 0.000244 MiB 00:06:45.314 element at address: 0x20001b4946c0 with size: 0.000244 MiB 00:06:45.314 element at address: 0x20001b4947c0 with size: 0.000244 MiB 00:06:45.314 element at address: 0x20001b4948c0 with size: 0.000244 MiB 00:06:45.314 element at address: 0x20001b4949c0 with size: 0.000244 MiB 00:06:45.314 element at address: 0x20001b494ac0 with size: 0.000244 MiB 00:06:45.314 element at address: 0x20001b494bc0 with size: 0.000244 MiB 00:06:45.314 element at address: 0x20001b494cc0 with size: 0.000244 MiB 00:06:45.314 element at address: 0x20001b494dc0 with size: 0.000244 MiB 00:06:45.314 element at address: 0x20001b494ec0 with size: 0.000244 MiB 00:06:45.314 element at address: 0x20001b494fc0 with size: 0.000244 MiB 00:06:45.314 element at address: 0x20001b4950c0 with size: 0.000244 MiB 00:06:45.314 element at address: 0x20001b4951c0 with size: 0.000244 MiB 00:06:45.314 element at address: 0x20001b4952c0 with size: 0.000244 MiB 00:06:45.314 element at address: 0x20001b4953c0 with size: 0.000244 MiB 00:06:45.314 element at address: 0x200028863f40 with size: 0.000244 MiB 00:06:45.314 element at address: 0x200028864040 with size: 0.000244 MiB 00:06:45.314 element at address: 0x20002886ad00 with size: 0.000244 MiB 00:06:45.314 element at address: 0x20002886af80 with size: 0.000244 MiB 00:06:45.314 element at address: 0x20002886b080 with size: 0.000244 MiB 00:06:45.314 element at address: 0x20002886b180 with size: 0.000244 MiB 00:06:45.314 element at address: 0x20002886b280 with size: 0.000244 MiB 00:06:45.314 element at address: 0x20002886b380 with size: 0.000244 MiB 00:06:45.314 element at address: 0x20002886b480 with size: 0.000244 MiB 
00:06:45.314 element at address: 0x20002886b580 with size: 0.000244 MiB 00:06:45.314 element at address: 0x20002886b680 with size: 0.000244 MiB 00:06:45.314 element at address: 0x20002886b780 with size: 0.000244 MiB 00:06:45.314 element at address: 0x20002886b880 with size: 0.000244 MiB 00:06:45.314 element at address: 0x20002886b980 with size: 0.000244 MiB 00:06:45.314 element at address: 0x20002886ba80 with size: 0.000244 MiB 00:06:45.314 element at address: 0x20002886bb80 with size: 0.000244 MiB 00:06:45.314 element at address: 0x20002886bc80 with size: 0.000244 MiB 00:06:45.314 element at address: 0x20002886bd80 with size: 0.000244 MiB 00:06:45.314 element at address: 0x20002886be80 with size: 0.000244 MiB 00:06:45.314 element at address: 0x20002886bf80 with size: 0.000244 MiB 00:06:45.314 element at address: 0x20002886c080 with size: 0.000244 MiB 00:06:45.314 element at address: 0x20002886c180 with size: 0.000244 MiB 00:06:45.314 element at address: 0x20002886c280 with size: 0.000244 MiB 00:06:45.314 element at address: 0x20002886c380 with size: 0.000244 MiB 00:06:45.314 element at address: 0x20002886c480 with size: 0.000244 MiB 00:06:45.314 element at address: 0x20002886c580 with size: 0.000244 MiB 00:06:45.314 element at address: 0x20002886c680 with size: 0.000244 MiB 00:06:45.314 element at address: 0x20002886c780 with size: 0.000244 MiB 00:06:45.314 element at address: 0x20002886c880 with size: 0.000244 MiB 00:06:45.314 element at address: 0x20002886c980 with size: 0.000244 MiB 00:06:45.314 element at address: 0x20002886ca80 with size: 0.000244 MiB 00:06:45.314 element at address: 0x20002886cb80 with size: 0.000244 MiB 00:06:45.314 element at address: 0x20002886cc80 with size: 0.000244 MiB 00:06:45.314 element at address: 0x20002886cd80 with size: 0.000244 MiB 00:06:45.314 element at address: 0x20002886ce80 with size: 0.000244 MiB 00:06:45.314 element at address: 0x20002886cf80 with size: 0.000244 MiB 00:06:45.314 element at address: 0x20002886d080 with 
size: 0.000244 MiB 00:06:45.314 element at address: 0x20002886d180 with size: 0.000244 MiB 00:06:45.314 element at address: 0x20002886d280 with size: 0.000244 MiB 00:06:45.314 element at address: 0x20002886d380 with size: 0.000244 MiB 00:06:45.314 element at address: 0x20002886d480 with size: 0.000244 MiB 00:06:45.314 element at address: 0x20002886d580 with size: 0.000244 MiB 00:06:45.314 element at address: 0x20002886d680 with size: 0.000244 MiB 00:06:45.314 element at address: 0x20002886d780 with size: 0.000244 MiB 00:06:45.314 element at address: 0x20002886d880 with size: 0.000244 MiB 00:06:45.314 element at address: 0x20002886d980 with size: 0.000244 MiB 00:06:45.314 element at address: 0x20002886da80 with size: 0.000244 MiB 00:06:45.314 element at address: 0x20002886db80 with size: 0.000244 MiB 00:06:45.314 element at address: 0x20002886dc80 with size: 0.000244 MiB 00:06:45.315 element at address: 0x20002886dd80 with size: 0.000244 MiB 00:06:45.315 element at address: 0x20002886de80 with size: 0.000244 MiB 00:06:45.315 element at address: 0x20002886df80 with size: 0.000244 MiB 00:06:45.315 element at address: 0x20002886e080 with size: 0.000244 MiB 00:06:45.315 element at address: 0x20002886e180 with size: 0.000244 MiB 00:06:45.315 element at address: 0x20002886e280 with size: 0.000244 MiB 00:06:45.315 element at address: 0x20002886e380 with size: 0.000244 MiB 00:06:45.315 element at address: 0x20002886e480 with size: 0.000244 MiB 00:06:45.315 element at address: 0x20002886e580 with size: 0.000244 MiB 00:06:45.315 element at address: 0x20002886e680 with size: 0.000244 MiB 00:06:45.315 element at address: 0x20002886e780 with size: 0.000244 MiB 00:06:45.315 element at address: 0x20002886e880 with size: 0.000244 MiB 00:06:45.315 element at address: 0x20002886e980 with size: 0.000244 MiB 00:06:45.315 element at address: 0x20002886ea80 with size: 0.000244 MiB 00:06:45.315 element at address: 0x20002886eb80 with size: 0.000244 MiB 00:06:45.315 element at address: 
0x20002886ec80 with size: 0.000244 MiB 00:06:45.315 element at address: 0x20002886ed80 with size: 0.000244 MiB 00:06:45.315 element at address: 0x20002886ee80 with size: 0.000244 MiB 00:06:45.315 element at address: 0x20002886ef80 with size: 0.000244 MiB 00:06:45.315 element at address: 0x20002886f080 with size: 0.000244 MiB 00:06:45.315 element at address: 0x20002886f180 with size: 0.000244 MiB 00:06:45.315 element at address: 0x20002886f280 with size: 0.000244 MiB 00:06:45.315 element at address: 0x20002886f380 with size: 0.000244 MiB 00:06:45.315 element at address: 0x20002886f480 with size: 0.000244 MiB 00:06:45.315 element at address: 0x20002886f580 with size: 0.000244 MiB 00:06:45.315 element at address: 0x20002886f680 with size: 0.000244 MiB 00:06:45.315 element at address: 0x20002886f780 with size: 0.000244 MiB 00:06:45.315 element at address: 0x20002886f880 with size: 0.000244 MiB 00:06:45.315 element at address: 0x20002886f980 with size: 0.000244 MiB 00:06:45.315 element at address: 0x20002886fa80 with size: 0.000244 MiB 00:06:45.315 element at address: 0x20002886fb80 with size: 0.000244 MiB 00:06:45.315 element at address: 0x20002886fc80 with size: 0.000244 MiB 00:06:45.315 element at address: 0x20002886fd80 with size: 0.000244 MiB 00:06:45.315 element at address: 0x20002886fe80 with size: 0.000244 MiB 00:06:45.315 list of memzone associated elements. 
size: 607.930908 MiB 00:06:45.315 element at address: 0x20001b4954c0 with size: 211.416809 MiB 00:06:45.315 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:06:45.315 element at address: 0x20002886ff80 with size: 157.562622 MiB 00:06:45.315 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:06:45.315 element at address: 0x200012df1e40 with size: 100.055115 MiB 00:06:45.315 associated memzone info: size: 100.054932 MiB name: MP_bdev_io_58108_0 00:06:45.315 element at address: 0x200000dff340 with size: 48.003113 MiB 00:06:45.315 associated memzone info: size: 48.002930 MiB name: MP_msgpool_58108_0 00:06:45.315 element at address: 0x200003ffdb40 with size: 36.008972 MiB 00:06:45.315 associated memzone info: size: 36.008789 MiB name: MP_fsdev_io_58108_0 00:06:45.315 element at address: 0x200019fbe900 with size: 20.255615 MiB 00:06:45.315 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:06:45.315 element at address: 0x2000327feb00 with size: 18.005127 MiB 00:06:45.315 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:06:45.315 element at address: 0x2000004ffec0 with size: 3.000305 MiB 00:06:45.315 associated memzone info: size: 3.000122 MiB name: MP_evtpool_58108_0 00:06:45.315 element at address: 0x2000009ffdc0 with size: 2.000549 MiB 00:06:45.315 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_58108 00:06:45.315 element at address: 0x2000002d7c00 with size: 1.008179 MiB 00:06:45.315 associated memzone info: size: 1.007996 MiB name: MP_evtpool_58108 00:06:45.315 element at address: 0x2000196fde00 with size: 1.008179 MiB 00:06:45.315 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:06:45.315 element at address: 0x200019ebc780 with size: 1.008179 MiB 00:06:45.315 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:06:45.315 element at address: 0x2000192fde00 with size: 1.008179 MiB 00:06:45.315 associated 
memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:06:45.315 element at address: 0x200012cefcc0 with size: 1.008179 MiB 00:06:45.315 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:06:45.315 element at address: 0x200000cff100 with size: 1.000549 MiB 00:06:45.315 associated memzone info: size: 1.000366 MiB name: RG_ring_0_58108 00:06:45.315 element at address: 0x2000008ffb80 with size: 1.000549 MiB 00:06:45.315 associated memzone info: size: 1.000366 MiB name: RG_ring_1_58108 00:06:45.315 element at address: 0x200019affd40 with size: 1.000549 MiB 00:06:45.315 associated memzone info: size: 1.000366 MiB name: RG_ring_4_58108 00:06:45.315 element at address: 0x2000326fe8c0 with size: 1.000549 MiB 00:06:45.315 associated memzone info: size: 1.000366 MiB name: RG_ring_5_58108 00:06:45.315 element at address: 0x20000087f5c0 with size: 0.500549 MiB 00:06:45.315 associated memzone info: size: 0.500366 MiB name: RG_MP_fsdev_io_58108 00:06:45.315 element at address: 0x200000c7ecc0 with size: 0.500549 MiB 00:06:45.315 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_58108 00:06:45.315 element at address: 0x20001967dac0 with size: 0.500549 MiB 00:06:45.315 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:06:45.315 element at address: 0x200012c6f980 with size: 0.500549 MiB 00:06:45.315 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:06:45.315 element at address: 0x200019e7c440 with size: 0.250549 MiB 00:06:45.315 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:06:45.315 element at address: 0x2000002b78c0 with size: 0.125549 MiB 00:06:45.315 associated memzone info: size: 0.125366 MiB name: RG_MP_evtpool_58108 00:06:45.315 element at address: 0x20000085df80 with size: 0.125549 MiB 00:06:45.315 associated memzone info: size: 0.125366 MiB name: RG_ring_2_58108 00:06:45.315 element at address: 0x2000192f5ac0 with size: 0.031799 MiB 00:06:45.315 
associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:06:45.315 element at address: 0x200028864140 with size: 0.023804 MiB 00:06:45.315 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:06:45.315 element at address: 0x200000859d40 with size: 0.016174 MiB 00:06:45.315 associated memzone info: size: 0.015991 MiB name: RG_ring_3_58108 00:06:45.315 element at address: 0x20002886a2c0 with size: 0.002502 MiB 00:06:45.315 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:06:45.315 element at address: 0x2000004ffa40 with size: 0.000366 MiB 00:06:45.315 associated memzone info: size: 0.000183 MiB name: MP_msgpool_58108 00:06:45.315 element at address: 0x2000008ff900 with size: 0.000366 MiB 00:06:45.315 associated memzone info: size: 0.000183 MiB name: MP_fsdev_io_58108 00:06:45.315 element at address: 0x200012bffd80 with size: 0.000366 MiB 00:06:45.315 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_58108 00:06:45.315 element at address: 0x20002886ae00 with size: 0.000366 MiB 00:06:45.315 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:06:45.315 03:57:38 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:06:45.315 03:57:38 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 58108 00:06:45.315 03:57:38 dpdk_mem_utility -- common/autotest_common.sh@954 -- # '[' -z 58108 ']' 00:06:45.315 03:57:38 dpdk_mem_utility -- common/autotest_common.sh@958 -- # kill -0 58108 00:06:45.315 03:57:38 dpdk_mem_utility -- common/autotest_common.sh@959 -- # uname 00:06:45.315 03:57:38 dpdk_mem_utility -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:45.315 03:57:38 dpdk_mem_utility -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58108 00:06:45.315 killing process with pid 58108 00:06:45.315 03:57:38 dpdk_mem_utility -- common/autotest_common.sh@960 -- # process_name=reactor_0 
00:06:45.315 03:57:38 dpdk_mem_utility -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:45.315 03:57:38 dpdk_mem_utility -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58108' 00:06:45.315 03:57:38 dpdk_mem_utility -- common/autotest_common.sh@973 -- # kill 58108 00:06:45.315 03:57:38 dpdk_mem_utility -- common/autotest_common.sh@978 -- # wait 58108 00:06:47.854 ************************************ 00:06:47.854 END TEST dpdk_mem_utility 00:06:47.854 ************************************ 00:06:47.854 00:06:47.854 real 0m4.076s 00:06:47.854 user 0m3.969s 00:06:47.854 sys 0m0.595s 00:06:47.854 03:57:40 dpdk_mem_utility -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:47.854 03:57:40 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:06:47.854 03:57:41 -- spdk/autotest.sh@168 -- # run_test event /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:06:47.854 03:57:41 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:47.854 03:57:41 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:47.854 03:57:41 -- common/autotest_common.sh@10 -- # set +x 00:06:47.854 ************************************ 00:06:47.854 START TEST event 00:06:47.854 ************************************ 00:06:47.854 03:57:41 event -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:06:47.854 * Looking for test storage... 
00:06:47.854 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:06:47.854 03:57:41 event -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:06:47.854 03:57:41 event -- common/autotest_common.sh@1711 -- # lcov --version 00:06:47.854 03:57:41 event -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:06:48.114 03:57:41 event -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:06:48.114 03:57:41 event -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:48.114 03:57:41 event -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:48.114 03:57:41 event -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:48.114 03:57:41 event -- scripts/common.sh@336 -- # IFS=.-: 00:06:48.114 03:57:41 event -- scripts/common.sh@336 -- # read -ra ver1 00:06:48.114 03:57:41 event -- scripts/common.sh@337 -- # IFS=.-: 00:06:48.114 03:57:41 event -- scripts/common.sh@337 -- # read -ra ver2 00:06:48.114 03:57:41 event -- scripts/common.sh@338 -- # local 'op=<' 00:06:48.114 03:57:41 event -- scripts/common.sh@340 -- # ver1_l=2 00:06:48.114 03:57:41 event -- scripts/common.sh@341 -- # ver2_l=1 00:06:48.114 03:57:41 event -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:48.114 03:57:41 event -- scripts/common.sh@344 -- # case "$op" in 00:06:48.114 03:57:41 event -- scripts/common.sh@345 -- # : 1 00:06:48.114 03:57:41 event -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:48.114 03:57:41 event -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:48.114 03:57:41 event -- scripts/common.sh@365 -- # decimal 1 00:06:48.114 03:57:41 event -- scripts/common.sh@353 -- # local d=1 00:06:48.114 03:57:41 event -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:48.114 03:57:41 event -- scripts/common.sh@355 -- # echo 1 00:06:48.114 03:57:41 event -- scripts/common.sh@365 -- # ver1[v]=1 00:06:48.114 03:57:41 event -- scripts/common.sh@366 -- # decimal 2 00:06:48.114 03:57:41 event -- scripts/common.sh@353 -- # local d=2 00:06:48.114 03:57:41 event -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:48.114 03:57:41 event -- scripts/common.sh@355 -- # echo 2 00:06:48.114 03:57:41 event -- scripts/common.sh@366 -- # ver2[v]=2 00:06:48.114 03:57:41 event -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:48.114 03:57:41 event -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:48.114 03:57:41 event -- scripts/common.sh@368 -- # return 0 00:06:48.114 03:57:41 event -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:48.114 03:57:41 event -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:06:48.114 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:48.114 --rc genhtml_branch_coverage=1 00:06:48.114 --rc genhtml_function_coverage=1 00:06:48.114 --rc genhtml_legend=1 00:06:48.114 --rc geninfo_all_blocks=1 00:06:48.114 --rc geninfo_unexecuted_blocks=1 00:06:48.114 00:06:48.114 ' 00:06:48.114 03:57:41 event -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:06:48.114 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:48.114 --rc genhtml_branch_coverage=1 00:06:48.114 --rc genhtml_function_coverage=1 00:06:48.114 --rc genhtml_legend=1 00:06:48.114 --rc geninfo_all_blocks=1 00:06:48.114 --rc geninfo_unexecuted_blocks=1 00:06:48.114 00:06:48.114 ' 00:06:48.114 03:57:41 event -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:06:48.114 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:06:48.114 --rc genhtml_branch_coverage=1 00:06:48.114 --rc genhtml_function_coverage=1 00:06:48.114 --rc genhtml_legend=1 00:06:48.114 --rc geninfo_all_blocks=1 00:06:48.114 --rc geninfo_unexecuted_blocks=1 00:06:48.114 00:06:48.114 ' 00:06:48.114 03:57:41 event -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:06:48.114 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:48.114 --rc genhtml_branch_coverage=1 00:06:48.114 --rc genhtml_function_coverage=1 00:06:48.114 --rc genhtml_legend=1 00:06:48.114 --rc geninfo_all_blocks=1 00:06:48.114 --rc geninfo_unexecuted_blocks=1 00:06:48.114 00:06:48.114 ' 00:06:48.114 03:57:41 event -- event/event.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:06:48.114 03:57:41 event -- bdev/nbd_common.sh@6 -- # set -e 00:06:48.114 03:57:41 event -- event/event.sh@45 -- # run_test event_perf /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:06:48.114 03:57:41 event -- common/autotest_common.sh@1105 -- # '[' 6 -le 1 ']' 00:06:48.114 03:57:41 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:48.114 03:57:41 event -- common/autotest_common.sh@10 -- # set +x 00:06:48.114 ************************************ 00:06:48.114 START TEST event_perf 00:06:48.114 ************************************ 00:06:48.114 03:57:41 event.event_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:06:48.115 Running I/O for 1 seconds...[2024-12-06 03:57:41.332479] Starting SPDK v25.01-pre git sha1 a4d2a837b / DPDK 24.03.0 initialization... 
00:06:48.115 [2024-12-06 03:57:41.332617] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58216 ] 00:06:48.374 [2024-12-06 03:57:41.506043] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:48.374 [2024-12-06 03:57:41.627342] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:48.374 [2024-12-06 03:57:41.627360] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:48.374 [2024-12-06 03:57:41.627535] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:48.374 Running I/O for 1 seconds...[2024-12-06 03:57:41.627572] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:06:49.755 00:06:49.755 lcore 0: 192076 00:06:49.755 lcore 1: 192075 00:06:49.755 lcore 2: 192076 00:06:49.755 lcore 3: 192076 00:06:49.755 done. 
00:06:49.755 00:06:49.755 real 0m1.594s 00:06:49.755 user 0m4.365s 00:06:49.755 sys 0m0.106s 00:06:49.755 03:57:42 event.event_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:49.755 03:57:42 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:06:49.755 ************************************ 00:06:49.755 END TEST event_perf 00:06:49.755 ************************************ 00:06:49.755 03:57:42 event -- event/event.sh@46 -- # run_test event_reactor /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:06:49.755 03:57:42 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:06:49.755 03:57:42 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:49.755 03:57:42 event -- common/autotest_common.sh@10 -- # set +x 00:06:49.755 ************************************ 00:06:49.755 START TEST event_reactor 00:06:49.755 ************************************ 00:06:49.755 03:57:42 event.event_reactor -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:06:49.755 [2024-12-06 03:57:42.989608] Starting SPDK v25.01-pre git sha1 a4d2a837b / DPDK 24.03.0 initialization... 
00:06:49.755 [2024-12-06 03:57:42.990148] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58256 ] 00:06:50.022 [2024-12-06 03:57:43.164692] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:50.022 [2024-12-06 03:57:43.286488] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:51.418 test_start 00:06:51.418 oneshot 00:06:51.418 tick 100 00:06:51.418 tick 100 00:06:51.418 tick 250 00:06:51.418 tick 100 00:06:51.418 tick 100 00:06:51.418 tick 250 00:06:51.418 tick 100 00:06:51.418 tick 500 00:06:51.418 tick 100 00:06:51.418 tick 100 00:06:51.418 tick 250 00:06:51.418 tick 100 00:06:51.418 tick 100 00:06:51.418 test_end 00:06:51.418 00:06:51.418 real 0m1.577s 00:06:51.418 user 0m1.365s 00:06:51.418 sys 0m0.102s 00:06:51.418 03:57:44 event.event_reactor -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:51.418 03:57:44 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:06:51.418 ************************************ 00:06:51.418 END TEST event_reactor 00:06:51.418 ************************************ 00:06:51.418 03:57:44 event -- event/event.sh@47 -- # run_test event_reactor_perf /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:06:51.418 03:57:44 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:06:51.418 03:57:44 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:51.418 03:57:44 event -- common/autotest_common.sh@10 -- # set +x 00:06:51.418 ************************************ 00:06:51.418 START TEST event_reactor_perf 00:06:51.418 ************************************ 00:06:51.418 03:57:44 event.event_reactor_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:06:51.418 [2024-12-06 
03:57:44.625992] Starting SPDK v25.01-pre git sha1 a4d2a837b / DPDK 24.03.0 initialization... 00:06:51.418 [2024-12-06 03:57:44.626214] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58292 ] 00:06:51.677 [2024-12-06 03:57:44.801290] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:51.677 [2024-12-06 03:57:44.921030] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:53.056 test_start 00:06:53.056 test_end 00:06:53.056 Performance: 368345 events per second 00:06:53.056 00:06:53.056 real 0m1.567s 00:06:53.056 user 0m1.354s 00:06:53.056 sys 0m0.105s 00:06:53.056 03:57:46 event.event_reactor_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:53.056 03:57:46 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:06:53.056 ************************************ 00:06:53.056 END TEST event_reactor_perf 00:06:53.056 ************************************ 00:06:53.056 03:57:46 event -- event/event.sh@49 -- # uname -s 00:06:53.056 03:57:46 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:06:53.056 03:57:46 event -- event/event.sh@50 -- # run_test event_scheduler /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:06:53.056 03:57:46 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:53.056 03:57:46 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:53.056 03:57:46 event -- common/autotest_common.sh@10 -- # set +x 00:06:53.056 ************************************ 00:06:53.056 START TEST event_scheduler 00:06:53.056 ************************************ 00:06:53.056 03:57:46 event.event_scheduler -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:06:53.056 * Looking for test storage... 
00:06:53.056 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event/scheduler 00:06:53.056 03:57:46 event.event_scheduler -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:06:53.056 03:57:46 event.event_scheduler -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:06:53.056 03:57:46 event.event_scheduler -- common/autotest_common.sh@1711 -- # lcov --version 00:06:53.316 03:57:46 event.event_scheduler -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:06:53.316 03:57:46 event.event_scheduler -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:53.316 03:57:46 event.event_scheduler -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:53.316 03:57:46 event.event_scheduler -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:53.316 03:57:46 event.event_scheduler -- scripts/common.sh@336 -- # IFS=.-: 00:06:53.316 03:57:46 event.event_scheduler -- scripts/common.sh@336 -- # read -ra ver1 00:06:53.316 03:57:46 event.event_scheduler -- scripts/common.sh@337 -- # IFS=.-: 00:06:53.316 03:57:46 event.event_scheduler -- scripts/common.sh@337 -- # read -ra ver2 00:06:53.316 03:57:46 event.event_scheduler -- scripts/common.sh@338 -- # local 'op=<' 00:06:53.316 03:57:46 event.event_scheduler -- scripts/common.sh@340 -- # ver1_l=2 00:06:53.316 03:57:46 event.event_scheduler -- scripts/common.sh@341 -- # ver2_l=1 00:06:53.316 03:57:46 event.event_scheduler -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:53.316 03:57:46 event.event_scheduler -- scripts/common.sh@344 -- # case "$op" in 00:06:53.316 03:57:46 event.event_scheduler -- scripts/common.sh@345 -- # : 1 00:06:53.316 03:57:46 event.event_scheduler -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:53.316 03:57:46 event.event_scheduler -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:53.316 03:57:46 event.event_scheduler -- scripts/common.sh@365 -- # decimal 1 00:06:53.316 03:57:46 event.event_scheduler -- scripts/common.sh@353 -- # local d=1 00:06:53.316 03:57:46 event.event_scheduler -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:53.316 03:57:46 event.event_scheduler -- scripts/common.sh@355 -- # echo 1 00:06:53.316 03:57:46 event.event_scheduler -- scripts/common.sh@365 -- # ver1[v]=1 00:06:53.316 03:57:46 event.event_scheduler -- scripts/common.sh@366 -- # decimal 2 00:06:53.316 03:57:46 event.event_scheduler -- scripts/common.sh@353 -- # local d=2 00:06:53.316 03:57:46 event.event_scheduler -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:53.316 03:57:46 event.event_scheduler -- scripts/common.sh@355 -- # echo 2 00:06:53.316 03:57:46 event.event_scheduler -- scripts/common.sh@366 -- # ver2[v]=2 00:06:53.316 03:57:46 event.event_scheduler -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:53.316 03:57:46 event.event_scheduler -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:53.316 03:57:46 event.event_scheduler -- scripts/common.sh@368 -- # return 0 00:06:53.316 03:57:46 event.event_scheduler -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:53.316 03:57:46 event.event_scheduler -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:06:53.316 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:53.316 --rc genhtml_branch_coverage=1 00:06:53.316 --rc genhtml_function_coverage=1 00:06:53.316 --rc genhtml_legend=1 00:06:53.316 --rc geninfo_all_blocks=1 00:06:53.316 --rc geninfo_unexecuted_blocks=1 00:06:53.316 00:06:53.316 ' 00:06:53.316 03:57:46 event.event_scheduler -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:06:53.316 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:53.316 --rc genhtml_branch_coverage=1 00:06:53.316 --rc genhtml_function_coverage=1 00:06:53.316 --rc 
genhtml_legend=1 00:06:53.316 --rc geninfo_all_blocks=1 00:06:53.316 --rc geninfo_unexecuted_blocks=1 00:06:53.316 00:06:53.316 ' 00:06:53.316 03:57:46 event.event_scheduler -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:06:53.316 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:53.316 --rc genhtml_branch_coverage=1 00:06:53.316 --rc genhtml_function_coverage=1 00:06:53.316 --rc genhtml_legend=1 00:06:53.316 --rc geninfo_all_blocks=1 00:06:53.316 --rc geninfo_unexecuted_blocks=1 00:06:53.316 00:06:53.316 ' 00:06:53.316 03:57:46 event.event_scheduler -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:06:53.316 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:53.316 --rc genhtml_branch_coverage=1 00:06:53.316 --rc genhtml_function_coverage=1 00:06:53.316 --rc genhtml_legend=1 00:06:53.316 --rc geninfo_all_blocks=1 00:06:53.316 --rc geninfo_unexecuted_blocks=1 00:06:53.316 00:06:53.316 ' 00:06:53.316 03:57:46 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:06:53.316 03:57:46 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=58368 00:06:53.316 03:57:46 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:06:53.316 03:57:46 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:06:53.316 03:57:46 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 58368 00:06:53.316 03:57:46 event.event_scheduler -- common/autotest_common.sh@835 -- # '[' -z 58368 ']' 00:06:53.316 03:57:46 event.event_scheduler -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:53.316 03:57:46 event.event_scheduler -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:53.316 03:57:46 event.event_scheduler -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX 
domain socket /var/tmp/spdk.sock...' 00:06:53.316 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:53.316 03:57:46 event.event_scheduler -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:53.316 03:57:46 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:53.316 [2024-12-06 03:57:46.554946] Starting SPDK v25.01-pre git sha1 a4d2a837b / DPDK 24.03.0 initialization... 00:06:53.316 [2024-12-06 03:57:46.555179] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58368 ] 00:06:53.576 [2024-12-06 03:57:46.731860] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:53.576 [2024-12-06 03:57:46.856175] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:53.576 [2024-12-06 03:57:46.856430] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:53.576 [2024-12-06 03:57:46.856344] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:53.576 [2024-12-06 03:57:46.856466] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:06:54.144 03:57:47 event.event_scheduler -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:54.144 03:57:47 event.event_scheduler -- common/autotest_common.sh@868 -- # return 0 00:06:54.144 03:57:47 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:06:54.144 03:57:47 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:54.144 03:57:47 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:54.144 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:06:54.144 POWER: Cannot set governor of lcore 0 to userspace 00:06:54.144 POWER: failed to open 
/sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:06:54.144 POWER: Cannot set governor of lcore 0 to performance 00:06:54.144 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:06:54.144 POWER: Cannot set governor of lcore 0 to userspace 00:06:54.144 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:06:54.144 POWER: Cannot set governor of lcore 0 to userspace 00:06:54.144 GUEST_CHANNEL: Opening channel '/dev/virtio-ports/virtio.serial.port.poweragent.0' for lcore 0 00:06:54.144 GUEST_CHANNEL: Unable to connect to '/dev/virtio-ports/virtio.serial.port.poweragent.0' with error No such file or directory 00:06:54.144 POWER: Unable to set Power Management Environment for lcore 0 00:06:54.144 [2024-12-06 03:57:47.393571] dpdk_governor.c: 135:_init_core: *ERROR*: Failed to initialize on core0 00:06:54.144 [2024-12-06 03:57:47.393595] dpdk_governor.c: 196:_init: *ERROR*: Failed to initialize on core0 00:06:54.144 [2024-12-06 03:57:47.393607] scheduler_dynamic.c: 280:init: *NOTICE*: Unable to initialize dpdk governor 00:06:54.144 [2024-12-06 03:57:47.393634] scheduler_dynamic.c: 427:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:06:54.144 [2024-12-06 03:57:47.393643] scheduler_dynamic.c: 429:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:06:54.144 [2024-12-06 03:57:47.393654] scheduler_dynamic.c: 431:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:06:54.144 03:57:47 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:54.144 03:57:47 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:06:54.144 03:57:47 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:54.144 03:57:47 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:54.404 [2024-12-06 03:57:47.737989] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 
00:06:54.404 03:57:47 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:54.404 03:57:47 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:06:54.404 03:57:47 event.event_scheduler -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:54.404 03:57:47 event.event_scheduler -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:54.404 03:57:47 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:54.404 ************************************ 00:06:54.404 START TEST scheduler_create_thread 00:06:54.404 ************************************ 00:06:54.404 03:57:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1129 -- # scheduler_create_thread 00:06:54.404 03:57:47 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:06:54.404 03:57:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:54.404 03:57:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:54.664 2 00:06:54.664 03:57:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:54.664 03:57:47 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:06:54.664 03:57:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:54.664 03:57:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:54.664 3 00:06:54.664 03:57:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:54.664 03:57:47 event.event_scheduler.scheduler_create_thread -- 
scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:06:54.664 03:57:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:54.664 03:57:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:54.664 4 00:06:54.664 03:57:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:54.664 03:57:47 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:06:54.664 03:57:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:54.664 03:57:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:54.664 5 00:06:54.664 03:57:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:54.664 03:57:47 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:06:54.664 03:57:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:54.664 03:57:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:54.664 6 00:06:54.664 03:57:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:54.664 03:57:47 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:06:54.664 03:57:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:54.664 03:57:47 event.event_scheduler.scheduler_create_thread -- 
common/autotest_common.sh@10 -- # set +x 00:06:54.664 7 00:06:54.664 03:57:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:54.664 03:57:47 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:06:54.665 03:57:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:54.665 03:57:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:54.665 8 00:06:54.665 03:57:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:54.665 03:57:47 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:06:54.665 03:57:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:54.665 03:57:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:54.665 9 00:06:54.665 03:57:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:54.665 03:57:47 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:06:54.665 03:57:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:54.665 03:57:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:54.665 10 00:06:54.665 03:57:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:54.665 03:57:47 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n 
half_active -a 0 00:06:54.665 03:57:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:54.665 03:57:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:56.049 03:57:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:56.049 03:57:49 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:06:56.049 03:57:49 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:06:56.049 03:57:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:56.049 03:57:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:56.988 03:57:50 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:56.988 03:57:50 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100 00:06:56.988 03:57:50 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:56.988 03:57:50 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:57.557 03:57:50 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:57.557 03:57:50 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:06:57.557 03:57:50 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:06:57.557 03:57:50 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:57.557 03:57:50 
event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:58.503 ************************************ 00:06:58.503 END TEST scheduler_create_thread 00:06:58.503 ************************************ 00:06:58.503 03:57:51 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:58.503 00:06:58.503 real 0m3.886s 00:06:58.503 user 0m0.027s 00:06:58.503 sys 0m0.005s 00:06:58.503 03:57:51 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:58.503 03:57:51 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:58.503 03:57:51 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:06:58.503 03:57:51 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 58368 00:06:58.503 03:57:51 event.event_scheduler -- common/autotest_common.sh@954 -- # '[' -z 58368 ']' 00:06:58.503 03:57:51 event.event_scheduler -- common/autotest_common.sh@958 -- # kill -0 58368 00:06:58.503 03:57:51 event.event_scheduler -- common/autotest_common.sh@959 -- # uname 00:06:58.503 03:57:51 event.event_scheduler -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:58.503 03:57:51 event.event_scheduler -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58368 00:06:58.503 killing process with pid 58368 00:06:58.503 03:57:51 event.event_scheduler -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:06:58.503 03:57:51 event.event_scheduler -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:06:58.504 03:57:51 event.event_scheduler -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58368' 00:06:58.504 03:57:51 event.event_scheduler -- common/autotest_common.sh@973 -- # kill 58368 00:06:58.504 03:57:51 event.event_scheduler -- common/autotest_common.sh@978 -- # wait 58368 00:06:58.762 [2024-12-06 03:57:52.015955] scheduler.c: 
360:test_shutdown: *NOTICE*: Scheduler test application stopped. 00:07:00.137 00:07:00.137 real 0m6.990s 00:07:00.137 user 0m15.147s 00:07:00.137 sys 0m0.472s 00:07:00.137 ************************************ 00:07:00.137 END TEST event_scheduler 00:07:00.137 ************************************ 00:07:00.137 03:57:53 event.event_scheduler -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:00.137 03:57:53 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:07:00.137 03:57:53 event -- event/event.sh@51 -- # modprobe -n nbd 00:07:00.137 03:57:53 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:07:00.137 03:57:53 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:00.137 03:57:53 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:00.137 03:57:53 event -- common/autotest_common.sh@10 -- # set +x 00:07:00.137 ************************************ 00:07:00.137 START TEST app_repeat 00:07:00.137 ************************************ 00:07:00.137 03:57:53 event.app_repeat -- common/autotest_common.sh@1129 -- # app_repeat_test 00:07:00.137 03:57:53 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:00.137 03:57:53 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:00.137 03:57:53 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:07:00.137 03:57:53 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:07:00.137 03:57:53 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:07:00.137 03:57:53 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:07:00.137 03:57:53 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:07:00.137 03:57:53 event.app_repeat -- event/event.sh@19 -- # repeat_pid=58492 00:07:00.137 03:57:53 event.app_repeat -- event/event.sh@18 -- # /home/vagrant/spdk_repo/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:07:00.137 
03:57:53 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:07:00.137 03:57:53 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 58492' 00:07:00.137 Process app_repeat pid: 58492 00:07:00.137 03:57:53 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:07:00.137 03:57:53 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:07:00.137 spdk_app_start Round 0 00:07:00.137 03:57:53 event.app_repeat -- event/event.sh@25 -- # waitforlisten 58492 /var/tmp/spdk-nbd.sock 00:07:00.137 03:57:53 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 58492 ']' 00:07:00.137 03:57:53 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:07:00.137 03:57:53 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:00.137 03:57:53 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:07:00.137 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:07:00.137 03:57:53 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:00.137 03:57:53 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:07:00.137 [2024-12-06 03:57:53.344457] Starting SPDK v25.01-pre git sha1 a4d2a837b / DPDK 24.03.0 initialization... 
00:07:00.137 [2024-12-06 03:57:53.344687] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58492 ] 00:07:00.410 [2024-12-06 03:57:53.506156] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:00.410 [2024-12-06 03:57:53.624992] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:00.410 [2024-12-06 03:57:53.625022] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:00.980 03:57:54 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:00.980 03:57:54 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:07:00.980 03:57:54 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:07:01.238 Malloc0 00:07:01.238 03:57:54 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:07:01.497 Malloc1 00:07:01.497 03:57:54 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:07:01.497 03:57:54 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:01.497 03:57:54 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:07:01.497 03:57:54 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:07:01.497 03:57:54 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:01.497 03:57:54 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:07:01.497 03:57:54 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:07:01.497 03:57:54 event.app_repeat -- 
bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:01.497 03:57:54 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:07:01.497 03:57:54 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:07:01.497 03:57:54 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:01.497 03:57:54 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:07:01.497 03:57:54 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:07:01.497 03:57:54 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:07:01.497 03:57:54 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:01.497 03:57:54 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:07:01.756 /dev/nbd0 00:07:01.756 03:57:55 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:07:01.756 03:57:55 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:07:01.756 03:57:55 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:07:01.756 03:57:55 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:07:01.756 03:57:55 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:01.756 03:57:55 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:01.756 03:57:55 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:07:01.756 03:57:55 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:07:01.756 03:57:55 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:01.756 03:57:55 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:01.756 03:57:55 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:07:01.756 1+0 records in 00:07:01.756 1+0 
records out 00:07:01.756 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000341955 s, 12.0 MB/s 00:07:01.756 03:57:55 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:07:01.756 03:57:55 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:07:01.756 03:57:55 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:07:01.756 03:57:55 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:01.756 03:57:55 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:07:01.756 03:57:55 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:01.756 03:57:55 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:01.756 03:57:55 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:07:02.014 /dev/nbd1 00:07:02.014 03:57:55 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:07:02.014 03:57:55 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:07:02.014 03:57:55 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:07:02.014 03:57:55 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:07:02.014 03:57:55 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:02.014 03:57:55 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:02.014 03:57:55 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:07:02.014 03:57:55 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:07:02.014 03:57:55 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:02.014 03:57:55 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:02.014 03:57:55 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 
of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:07:02.014 1+0 records in 00:07:02.014 1+0 records out 00:07:02.014 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000239141 s, 17.1 MB/s 00:07:02.014 03:57:55 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:07:02.014 03:57:55 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:07:02.014 03:57:55 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:07:02.014 03:57:55 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:02.014 03:57:55 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:07:02.014 03:57:55 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:02.014 03:57:55 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:02.014 03:57:55 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:02.014 03:57:55 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:02.014 03:57:55 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:02.272 03:57:55 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:07:02.272 { 00:07:02.272 "nbd_device": "/dev/nbd0", 00:07:02.272 "bdev_name": "Malloc0" 00:07:02.272 }, 00:07:02.272 { 00:07:02.272 "nbd_device": "/dev/nbd1", 00:07:02.272 "bdev_name": "Malloc1" 00:07:02.272 } 00:07:02.272 ]' 00:07:02.272 03:57:55 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:07:02.272 { 00:07:02.272 "nbd_device": "/dev/nbd0", 00:07:02.272 "bdev_name": "Malloc0" 00:07:02.272 }, 00:07:02.272 { 00:07:02.272 "nbd_device": "/dev/nbd1", 00:07:02.272 "bdev_name": "Malloc1" 00:07:02.272 } 00:07:02.272 ]' 00:07:02.272 03:57:55 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 
00:07:02.272 03:57:55 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:07:02.272 /dev/nbd1' 00:07:02.273 03:57:55 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:07:02.273 /dev/nbd1' 00:07:02.273 03:57:55 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:02.273 03:57:55 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:07:02.273 03:57:55 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:07:02.273 03:57:55 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:07:02.273 03:57:55 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:07:02.273 03:57:55 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:07:02.273 03:57:55 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:02.273 03:57:55 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:02.273 03:57:55 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:07:02.273 03:57:55 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:07:02.273 03:57:55 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:07:02.273 03:57:55 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:07:02.532 256+0 records in 00:07:02.532 256+0 records out 00:07:02.532 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0133853 s, 78.3 MB/s 00:07:02.532 03:57:55 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:02.532 03:57:55 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:07:02.532 256+0 records in 00:07:02.532 256+0 records out 00:07:02.532 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0205672 s, 51.0 MB/s 00:07:02.532 03:57:55 event.app_repeat -- 
bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:02.532 03:57:55 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:07:02.532 256+0 records in 00:07:02.532 256+0 records out 00:07:02.532 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0248688 s, 42.2 MB/s 00:07:02.532 03:57:55 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:07:02.532 03:57:55 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:02.532 03:57:55 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:02.532 03:57:55 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:07:02.532 03:57:55 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:07:02.532 03:57:55 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:07:02.532 03:57:55 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:07:02.532 03:57:55 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:02.532 03:57:55 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:07:02.532 03:57:55 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:02.532 03:57:55 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:07:02.532 03:57:55 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:07:02.532 03:57:55 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:07:02.532 03:57:55 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:02.532 03:57:55 event.app_repeat -- bdev/nbd_common.sh@50 -- # 
nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:02.532 03:57:55 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:07:02.532 03:57:55 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:07:02.532 03:57:55 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:02.532 03:57:55 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:07:02.791 03:57:55 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:07:02.791 03:57:55 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:07:02.791 03:57:55 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:07:02.791 03:57:55 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:02.791 03:57:55 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:02.791 03:57:55 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:07:02.791 03:57:55 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:07:02.791 03:57:55 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:07:02.791 03:57:55 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:02.791 03:57:55 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:07:03.051 03:57:56 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:07:03.051 03:57:56 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:07:03.051 03:57:56 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:07:03.051 03:57:56 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:03.051 03:57:56 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:03.051 03:57:56 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:07:03.051 03:57:56 event.app_repeat -- bdev/nbd_common.sh@41 -- # 
break 00:07:03.051 03:57:56 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:07:03.051 03:57:56 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:03.051 03:57:56 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:03.051 03:57:56 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:03.312 03:57:56 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:07:03.312 03:57:56 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:03.312 03:57:56 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:07:03.312 03:57:56 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:07:03.312 03:57:56 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:03.312 03:57:56 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:07:03.312 03:57:56 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:07:03.312 03:57:56 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:07:03.312 03:57:56 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:07:03.312 03:57:56 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:07:03.312 03:57:56 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:07:03.312 03:57:56 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:07:03.312 03:57:56 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:07:03.572 03:57:56 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:07:04.952 [2024-12-06 03:57:58.070694] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:04.952 [2024-12-06 03:57:58.187673] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:04.952 [2024-12-06 03:57:58.187682] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:05.211 
[2024-12-06 03:57:58.381480] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:07:05.211 [2024-12-06 03:57:58.381600] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:07:06.673 spdk_app_start Round 1 00:07:06.673 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:07:06.673 03:57:59 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:07:06.673 03:57:59 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:07:06.673 03:57:59 event.app_repeat -- event/event.sh@25 -- # waitforlisten 58492 /var/tmp/spdk-nbd.sock 00:07:06.673 03:57:59 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 58492 ']' 00:07:06.673 03:57:59 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:07:06.673 03:57:59 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:06.673 03:57:59 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
00:07:06.673 03:57:59 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:06.673 03:57:59 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:07:06.933 03:58:00 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:06.933 03:58:00 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:07:06.933 03:58:00 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:07:07.193 Malloc0 00:07:07.193 03:58:00 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:07:07.452 Malloc1 00:07:07.452 03:58:00 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:07:07.452 03:58:00 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:07.452 03:58:00 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:07:07.452 03:58:00 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:07:07.452 03:58:00 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:07.452 03:58:00 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:07:07.452 03:58:00 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:07:07.452 03:58:00 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:07.452 03:58:00 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:07:07.452 03:58:00 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:07:07.452 03:58:00 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:07.452 03:58:00 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:07:07.452 03:58:00 
event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:07:07.452 03:58:00 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:07:07.452 03:58:00 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:07.452 03:58:00 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:07:07.712 /dev/nbd0 00:07:07.712 03:58:00 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:07:07.712 03:58:00 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:07:07.712 03:58:00 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:07:07.712 03:58:00 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:07:07.712 03:58:00 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:07.712 03:58:00 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:07.712 03:58:00 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:07:07.712 03:58:00 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:07:07.712 03:58:00 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:07.712 03:58:00 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:07.712 03:58:00 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:07:07.712 1+0 records in 00:07:07.712 1+0 records out 00:07:07.712 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000360258 s, 11.4 MB/s 00:07:07.712 03:58:00 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:07:07.712 03:58:00 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:07:07.712 03:58:00 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:07:07.712 
03:58:00 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:07.712 03:58:00 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:07:07.712 03:58:00 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:07.712 03:58:00 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:07.712 03:58:00 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:07:07.972 /dev/nbd1 00:07:07.972 03:58:01 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:07:07.972 03:58:01 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:07:07.972 03:58:01 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:07:07.972 03:58:01 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:07:07.972 03:58:01 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:07.972 03:58:01 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:07.972 03:58:01 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:07:07.972 03:58:01 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:07:07.972 03:58:01 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:07.972 03:58:01 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:07.972 03:58:01 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:07:07.972 1+0 records in 00:07:07.972 1+0 records out 00:07:07.972 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000494274 s, 8.3 MB/s 00:07:07.972 03:58:01 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:07:07.972 03:58:01 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:07:07.972 03:58:01 event.app_repeat 
-- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:07:07.972 03:58:01 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:07.972 03:58:01 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:07:07.972 03:58:01 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:07.972 03:58:01 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:07.972 03:58:01 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:07.972 03:58:01 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:07.972 03:58:01 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:08.234 03:58:01 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:07:08.234 { 00:07:08.234 "nbd_device": "/dev/nbd0", 00:07:08.234 "bdev_name": "Malloc0" 00:07:08.234 }, 00:07:08.234 { 00:07:08.234 "nbd_device": "/dev/nbd1", 00:07:08.234 "bdev_name": "Malloc1" 00:07:08.234 } 00:07:08.234 ]' 00:07:08.234 03:58:01 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:07:08.234 { 00:07:08.234 "nbd_device": "/dev/nbd0", 00:07:08.234 "bdev_name": "Malloc0" 00:07:08.234 }, 00:07:08.234 { 00:07:08.234 "nbd_device": "/dev/nbd1", 00:07:08.234 "bdev_name": "Malloc1" 00:07:08.234 } 00:07:08.234 ]' 00:07:08.234 03:58:01 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:08.234 03:58:01 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:07:08.234 /dev/nbd1' 00:07:08.234 03:58:01 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:07:08.234 /dev/nbd1' 00:07:08.234 03:58:01 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:08.234 03:58:01 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:07:08.234 03:58:01 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:07:08.234 
03:58:01 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:07:08.234 03:58:01 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:07:08.234 03:58:01 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:07:08.234 03:58:01 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:08.234 03:58:01 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:08.234 03:58:01 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:07:08.234 03:58:01 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:07:08.234 03:58:01 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:07:08.234 03:58:01 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:07:08.234 256+0 records in 00:07:08.234 256+0 records out 00:07:08.234 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0108289 s, 96.8 MB/s 00:07:08.234 03:58:01 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:08.234 03:58:01 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:07:08.234 256+0 records in 00:07:08.234 256+0 records out 00:07:08.234 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0229258 s, 45.7 MB/s 00:07:08.234 03:58:01 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:08.234 03:58:01 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:07:08.234 256+0 records in 00:07:08.234 256+0 records out 00:07:08.234 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0249538 s, 42.0 MB/s 00:07:08.234 03:58:01 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 
00:07:08.234 03:58:01 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:08.234 03:58:01 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:08.234 03:58:01 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:07:08.234 03:58:01 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:07:08.234 03:58:01 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:07:08.234 03:58:01 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:07:08.234 03:58:01 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:08.234 03:58:01 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:07:08.234 03:58:01 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:08.234 03:58:01 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:07:08.234 03:58:01 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:07:08.234 03:58:01 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:07:08.234 03:58:01 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:08.234 03:58:01 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:08.234 03:58:01 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:07:08.234 03:58:01 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:07:08.234 03:58:01 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:08.234 03:58:01 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:07:08.493 03:58:01 
event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:07:08.493 03:58:01 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:07:08.493 03:58:01 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:07:08.493 03:58:01 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:08.493 03:58:01 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:08.493 03:58:01 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:07:08.493 03:58:01 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:07:08.493 03:58:01 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:07:08.493 03:58:01 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:08.493 03:58:01 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:07:08.753 03:58:02 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:07:08.753 03:58:02 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:07:08.753 03:58:02 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:07:08.753 03:58:02 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:08.753 03:58:02 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:08.753 03:58:02 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:07:08.753 03:58:02 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:07:08.753 03:58:02 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:07:08.753 03:58:02 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:08.753 03:58:02 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:08.753 03:58:02 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:09.012 03:58:02 
event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:07:09.012 03:58:02 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:07:09.012 03:58:02 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:09.012 03:58:02 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:07:09.012 03:58:02 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:09.012 03:58:02 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:07:09.012 03:58:02 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:07:09.012 03:58:02 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:07:09.012 03:58:02 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:07:09.012 03:58:02 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:07:09.012 03:58:02 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:07:09.012 03:58:02 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:07:09.012 03:58:02 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:07:09.582 03:58:02 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:07:10.963 [2024-12-06 03:58:03.925586] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:10.963 [2024-12-06 03:58:04.039510] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:10.963 [2024-12-06 03:58:04.039534] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:10.963 [2024-12-06 03:58:04.235340] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:07:10.963 [2024-12-06 03:58:04.235439] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:07:12.871 spdk_app_start Round 2 00:07:12.871 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
00:07:12.871 03:58:05 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:07:12.871 03:58:05 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:07:12.871 03:58:05 event.app_repeat -- event/event.sh@25 -- # waitforlisten 58492 /var/tmp/spdk-nbd.sock 00:07:12.871 03:58:05 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 58492 ']' 00:07:12.871 03:58:05 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:07:12.871 03:58:05 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:12.871 03:58:05 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:07:12.871 03:58:05 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:12.871 03:58:05 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:07:12.871 03:58:05 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:12.871 03:58:05 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:07:12.871 03:58:05 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:07:12.871 Malloc0 00:07:13.131 03:58:06 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:07:13.131 Malloc1 00:07:13.391 03:58:06 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:07:13.391 03:58:06 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:13.391 03:58:06 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:07:13.391 03:58:06 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:07:13.391 03:58:06 event.app_repeat -- bdev/nbd_common.sh@92 -- # 
nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:13.391 03:58:06 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:07:13.391 03:58:06 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:07:13.391 03:58:06 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:13.391 03:58:06 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:07:13.391 03:58:06 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:07:13.391 03:58:06 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:13.391 03:58:06 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:07:13.391 03:58:06 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:07:13.391 03:58:06 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:07:13.391 03:58:06 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:13.391 03:58:06 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:07:13.391 /dev/nbd0 00:07:13.391 03:58:06 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:07:13.391 03:58:06 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:07:13.391 03:58:06 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:07:13.391 03:58:06 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:07:13.391 03:58:06 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:13.391 03:58:06 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:13.391 03:58:06 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:07:13.391 03:58:06 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:07:13.391 03:58:06 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 
00:07:13.392 03:58:06 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:13.392 03:58:06 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:07:13.392 1+0 records in 00:07:13.392 1+0 records out 00:07:13.392 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00019882 s, 20.6 MB/s 00:07:13.392 03:58:06 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:07:13.652 03:58:06 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:07:13.652 03:58:06 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:07:13.652 03:58:06 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:13.652 03:58:06 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:07:13.652 03:58:06 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:13.652 03:58:06 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:13.652 03:58:06 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:07:13.652 /dev/nbd1 00:07:13.652 03:58:06 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:07:13.652 03:58:06 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:07:13.652 03:58:06 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:07:13.652 03:58:06 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:07:13.652 03:58:06 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:13.652 03:58:06 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:13.652 03:58:06 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:07:13.652 03:58:06 event.app_repeat -- 
common/autotest_common.sh@877 -- # break 00:07:13.652 03:58:06 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:13.652 03:58:06 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:13.652 03:58:06 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:07:13.652 1+0 records in 00:07:13.652 1+0 records out 00:07:13.652 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000468009 s, 8.8 MB/s 00:07:13.652 03:58:06 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:07:13.912 03:58:07 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:07:13.912 03:58:07 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:07:13.912 03:58:07 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:13.912 03:58:07 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:07:13.912 03:58:07 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:13.912 03:58:07 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:13.912 03:58:07 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:13.912 03:58:07 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:13.912 03:58:07 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:13.912 03:58:07 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:07:13.912 { 00:07:13.912 "nbd_device": "/dev/nbd0", 00:07:13.912 "bdev_name": "Malloc0" 00:07:13.912 }, 00:07:13.912 { 00:07:13.912 "nbd_device": "/dev/nbd1", 00:07:13.912 "bdev_name": "Malloc1" 00:07:13.912 } 00:07:13.912 ]' 00:07:13.912 03:58:07 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | 
.nbd_device' 00:07:13.912 03:58:07 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:07:13.912 { 00:07:13.912 "nbd_device": "/dev/nbd0", 00:07:13.912 "bdev_name": "Malloc0" 00:07:13.913 }, 00:07:13.913 { 00:07:13.913 "nbd_device": "/dev/nbd1", 00:07:13.913 "bdev_name": "Malloc1" 00:07:13.913 } 00:07:13.913 ]' 00:07:14.173 03:58:07 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:07:14.173 /dev/nbd1' 00:07:14.173 03:58:07 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:07:14.173 /dev/nbd1' 00:07:14.173 03:58:07 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:14.173 03:58:07 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:07:14.173 03:58:07 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:07:14.173 03:58:07 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:07:14.174 03:58:07 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:07:14.174 03:58:07 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:07:14.174 03:58:07 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:14.174 03:58:07 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:14.174 03:58:07 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:07:14.174 03:58:07 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:07:14.174 03:58:07 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:07:14.174 03:58:07 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:07:14.174 256+0 records in 00:07:14.174 256+0 records out 00:07:14.174 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00776244 s, 135 MB/s 00:07:14.174 03:58:07 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:14.174 03:58:07 
event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:07:14.174 256+0 records in 00:07:14.174 256+0 records out 00:07:14.174 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0213653 s, 49.1 MB/s 00:07:14.174 03:58:07 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:14.174 03:58:07 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:07:14.174 256+0 records in 00:07:14.174 256+0 records out 00:07:14.174 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0252273 s, 41.6 MB/s 00:07:14.174 03:58:07 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:07:14.174 03:58:07 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:14.174 03:58:07 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:14.174 03:58:07 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:07:14.174 03:58:07 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:07:14.174 03:58:07 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:07:14.174 03:58:07 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:07:14.174 03:58:07 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:14.174 03:58:07 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:07:14.174 03:58:07 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:14.174 03:58:07 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:07:14.174 03:58:07 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm 
/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:07:14.174 03:58:07 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:07:14.174 03:58:07 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:14.174 03:58:07 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:14.174 03:58:07 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:07:14.174 03:58:07 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:07:14.174 03:58:07 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:14.174 03:58:07 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:07:14.433 03:58:07 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:07:14.433 03:58:07 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:07:14.433 03:58:07 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:07:14.433 03:58:07 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:14.433 03:58:07 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:14.433 03:58:07 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:07:14.433 03:58:07 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:07:14.433 03:58:07 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:07:14.433 03:58:07 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:14.433 03:58:07 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:07:14.691 03:58:07 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:07:14.691 03:58:07 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:07:14.691 03:58:07 event.app_repeat -- bdev/nbd_common.sh@35 
-- # local nbd_name=nbd1 00:07:14.691 03:58:07 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:14.691 03:58:07 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:14.691 03:58:07 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:07:14.691 03:58:07 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:07:14.691 03:58:07 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:07:14.691 03:58:07 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:14.691 03:58:07 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:14.691 03:58:07 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:14.949 03:58:08 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:07:14.949 03:58:08 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:07:14.949 03:58:08 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:14.949 03:58:08 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:07:14.949 03:58:08 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:07:14.949 03:58:08 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:14.949 03:58:08 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:07:14.949 03:58:08 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:07:14.949 03:58:08 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:07:14.949 03:58:08 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:07:14.949 03:58:08 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:07:14.949 03:58:08 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:07:14.949 03:58:08 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:07:15.208 03:58:08 event.app_repeat -- 
event/event.sh@35 -- # sleep 3 00:07:16.586 [2024-12-06 03:58:09.722679] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:16.586 [2024-12-06 03:58:09.837608] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:16.586 [2024-12-06 03:58:09.837610] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:16.845 [2024-12-06 03:58:10.035890] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:07:16.845 [2024-12-06 03:58:10.035997] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:07:18.224 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:07:18.224 03:58:11 event.app_repeat -- event/event.sh@38 -- # waitforlisten 58492 /var/tmp/spdk-nbd.sock 00:07:18.224 03:58:11 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 58492 ']' 00:07:18.224 03:58:11 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:07:18.224 03:58:11 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:18.224 03:58:11 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
00:07:18.224 03:58:11 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:18.224 03:58:11 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:07:18.483 03:58:11 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:18.483 03:58:11 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:07:18.483 03:58:11 event.app_repeat -- event/event.sh@39 -- # killprocess 58492 00:07:18.483 03:58:11 event.app_repeat -- common/autotest_common.sh@954 -- # '[' -z 58492 ']' 00:07:18.483 03:58:11 event.app_repeat -- common/autotest_common.sh@958 -- # kill -0 58492 00:07:18.483 03:58:11 event.app_repeat -- common/autotest_common.sh@959 -- # uname 00:07:18.483 03:58:11 event.app_repeat -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:18.483 03:58:11 event.app_repeat -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58492 00:07:18.483 killing process with pid 58492 00:07:18.483 03:58:11 event.app_repeat -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:18.483 03:58:11 event.app_repeat -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:18.483 03:58:11 event.app_repeat -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58492' 00:07:18.483 03:58:11 event.app_repeat -- common/autotest_common.sh@973 -- # kill 58492 00:07:18.483 03:58:11 event.app_repeat -- common/autotest_common.sh@978 -- # wait 58492 00:07:19.859 spdk_app_start is called in Round 0. 00:07:19.859 Shutdown signal received, stop current app iteration 00:07:19.859 Starting SPDK v25.01-pre git sha1 a4d2a837b / DPDK 24.03.0 reinitialization... 00:07:19.859 spdk_app_start is called in Round 1. 00:07:19.859 Shutdown signal received, stop current app iteration 00:07:19.859 Starting SPDK v25.01-pre git sha1 a4d2a837b / DPDK 24.03.0 reinitialization... 00:07:19.859 spdk_app_start is called in Round 2. 
00:07:19.859 Shutdown signal received, stop current app iteration 00:07:19.859 Starting SPDK v25.01-pre git sha1 a4d2a837b / DPDK 24.03.0 reinitialization... 00:07:19.859 spdk_app_start is called in Round 3. 00:07:19.859 Shutdown signal received, stop current app iteration 00:07:19.859 03:58:12 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:07:19.859 03:58:12 event.app_repeat -- event/event.sh@42 -- # return 0 00:07:19.859 00:07:19.859 real 0m19.588s 00:07:19.859 user 0m42.109s 00:07:19.859 sys 0m2.789s 00:07:19.859 03:58:12 event.app_repeat -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:19.859 03:58:12 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:07:19.859 ************************************ 00:07:19.859 END TEST app_repeat 00:07:19.859 ************************************ 00:07:19.859 03:58:12 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:07:19.859 03:58:12 event -- event/event.sh@55 -- # run_test cpu_locks /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:07:19.859 03:58:12 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:19.859 03:58:12 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:19.859 03:58:12 event -- common/autotest_common.sh@10 -- # set +x 00:07:19.859 ************************************ 00:07:19.859 START TEST cpu_locks 00:07:19.859 ************************************ 00:07:19.859 03:58:12 event.cpu_locks -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:07:19.859 * Looking for test storage... 
00:07:19.859 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:07:19.859 03:58:13 event.cpu_locks -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:07:19.859 03:58:13 event.cpu_locks -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:07:19.859 03:58:13 event.cpu_locks -- common/autotest_common.sh@1711 -- # lcov --version 00:07:19.859 03:58:13 event.cpu_locks -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:07:19.859 03:58:13 event.cpu_locks -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:19.859 03:58:13 event.cpu_locks -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:19.859 03:58:13 event.cpu_locks -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:19.859 03:58:13 event.cpu_locks -- scripts/common.sh@336 -- # IFS=.-: 00:07:19.859 03:58:13 event.cpu_locks -- scripts/common.sh@336 -- # read -ra ver1 00:07:19.859 03:58:13 event.cpu_locks -- scripts/common.sh@337 -- # IFS=.-: 00:07:19.859 03:58:13 event.cpu_locks -- scripts/common.sh@337 -- # read -ra ver2 00:07:19.859 03:58:13 event.cpu_locks -- scripts/common.sh@338 -- # local 'op=<' 00:07:19.859 03:58:13 event.cpu_locks -- scripts/common.sh@340 -- # ver1_l=2 00:07:19.859 03:58:13 event.cpu_locks -- scripts/common.sh@341 -- # ver2_l=1 00:07:19.859 03:58:13 event.cpu_locks -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:19.859 03:58:13 event.cpu_locks -- scripts/common.sh@344 -- # case "$op" in 00:07:19.859 03:58:13 event.cpu_locks -- scripts/common.sh@345 -- # : 1 00:07:19.859 03:58:13 event.cpu_locks -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:19.859 03:58:13 event.cpu_locks -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:19.859 03:58:13 event.cpu_locks -- scripts/common.sh@365 -- # decimal 1 00:07:19.859 03:58:13 event.cpu_locks -- scripts/common.sh@353 -- # local d=1 00:07:19.859 03:58:13 event.cpu_locks -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:19.859 03:58:13 event.cpu_locks -- scripts/common.sh@355 -- # echo 1 00:07:19.859 03:58:13 event.cpu_locks -- scripts/common.sh@365 -- # ver1[v]=1 00:07:19.859 03:58:13 event.cpu_locks -- scripts/common.sh@366 -- # decimal 2 00:07:19.859 03:58:13 event.cpu_locks -- scripts/common.sh@353 -- # local d=2 00:07:19.859 03:58:13 event.cpu_locks -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:19.859 03:58:13 event.cpu_locks -- scripts/common.sh@355 -- # echo 2 00:07:19.859 03:58:13 event.cpu_locks -- scripts/common.sh@366 -- # ver2[v]=2 00:07:19.859 03:58:13 event.cpu_locks -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:19.859 03:58:13 event.cpu_locks -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:19.859 03:58:13 event.cpu_locks -- scripts/common.sh@368 -- # return 0 00:07:19.859 03:58:13 event.cpu_locks -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:19.859 03:58:13 event.cpu_locks -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:07:19.859 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:19.859 --rc genhtml_branch_coverage=1 00:07:19.859 --rc genhtml_function_coverage=1 00:07:19.859 --rc genhtml_legend=1 00:07:19.859 --rc geninfo_all_blocks=1 00:07:19.859 --rc geninfo_unexecuted_blocks=1 00:07:19.859 00:07:19.859 ' 00:07:19.859 03:58:13 event.cpu_locks -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:07:19.859 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:19.859 --rc genhtml_branch_coverage=1 00:07:19.859 --rc genhtml_function_coverage=1 00:07:19.859 --rc genhtml_legend=1 00:07:19.859 --rc geninfo_all_blocks=1 00:07:19.859 --rc geninfo_unexecuted_blocks=1 
00:07:19.859 00:07:19.859 ' 00:07:19.859 03:58:13 event.cpu_locks -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:07:19.859 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:19.859 --rc genhtml_branch_coverage=1 00:07:19.859 --rc genhtml_function_coverage=1 00:07:19.859 --rc genhtml_legend=1 00:07:19.859 --rc geninfo_all_blocks=1 00:07:19.859 --rc geninfo_unexecuted_blocks=1 00:07:19.859 00:07:19.859 ' 00:07:19.859 03:58:13 event.cpu_locks -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:07:19.859 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:19.859 --rc genhtml_branch_coverage=1 00:07:19.859 --rc genhtml_function_coverage=1 00:07:19.859 --rc genhtml_legend=1 00:07:19.859 --rc geninfo_all_blocks=1 00:07:19.859 --rc geninfo_unexecuted_blocks=1 00:07:19.859 00:07:19.859 ' 00:07:19.859 03:58:13 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:07:19.859 03:58:13 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:07:19.859 03:58:13 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:07:19.859 03:58:13 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:07:19.859 03:58:13 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:19.859 03:58:13 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:19.859 03:58:13 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:19.859 ************************************ 00:07:19.859 START TEST default_locks 00:07:19.859 ************************************ 00:07:19.859 03:58:13 event.cpu_locks.default_locks -- common/autotest_common.sh@1129 -- # default_locks 00:07:19.859 03:58:13 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:07:19.859 03:58:13 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=58939 00:07:19.859 
03:58:13 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 58939 00:07:19.859 03:58:13 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 58939 ']' 00:07:19.859 03:58:13 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:19.859 03:58:13 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:19.859 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:19.859 03:58:13 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:19.859 03:58:13 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:19.859 03:58:13 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:07:20.132 [2024-12-06 03:58:13.240037] Starting SPDK v25.01-pre git sha1 a4d2a837b / DPDK 24.03.0 initialization... 
00:07:20.132 [2024-12-06 03:58:13.240192] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58939 ] 00:07:20.132 [2024-12-06 03:58:13.413591] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:20.390 [2024-12-06 03:58:13.525263] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:21.337 03:58:14 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:21.337 03:58:14 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 0 00:07:21.337 03:58:14 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 58939 00:07:21.337 03:58:14 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 58939 00:07:21.337 03:58:14 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:07:21.337 03:58:14 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 58939 00:07:21.337 03:58:14 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # '[' -z 58939 ']' 00:07:21.337 03:58:14 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # kill -0 58939 00:07:21.337 03:58:14 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # uname 00:07:21.337 03:58:14 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:21.337 03:58:14 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58939 00:07:21.597 03:58:14 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:21.597 03:58:14 event.cpu_locks.default_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:21.597 killing process with pid 58939 00:07:21.597 03:58:14 event.cpu_locks.default_locks -- 
common/autotest_common.sh@972 -- # echo 'killing process with pid 58939' 00:07:21.597 03:58:14 event.cpu_locks.default_locks -- common/autotest_common.sh@973 -- # kill 58939 00:07:21.597 03:58:14 event.cpu_locks.default_locks -- common/autotest_common.sh@978 -- # wait 58939 00:07:24.131 03:58:17 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 58939 00:07:24.131 03:58:17 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # local es=0 00:07:24.131 03:58:17 event.cpu_locks.default_locks -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 58939 00:07:24.131 03:58:17 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:07:24.131 03:58:17 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:24.131 03:58:17 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:07:24.131 03:58:17 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:24.131 03:58:17 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # waitforlisten 58939 00:07:24.131 03:58:17 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 58939 ']' 00:07:24.131 03:58:17 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:24.131 03:58:17 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:24.131 03:58:17 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:24.131 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:07:24.131 03:58:17 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:24.131 03:58:17 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:07:24.131 ERROR: process (pid: 58939) is no longer running 00:07:24.131 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 850: kill: (58939) - No such process 00:07:24.131 03:58:17 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:24.131 03:58:17 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 1 00:07:24.131 03:58:17 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # es=1 00:07:24.131 03:58:17 event.cpu_locks.default_locks -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:24.131 03:58:17 event.cpu_locks.default_locks -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:07:24.131 03:58:17 event.cpu_locks.default_locks -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:24.131 03:58:17 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:07:24.131 03:58:17 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:07:24.131 03:58:17 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:07:24.131 03:58:17 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:07:24.131 00:07:24.131 real 0m4.002s 00:07:24.131 user 0m3.938s 00:07:24.131 sys 0m0.615s 00:07:24.131 03:58:17 event.cpu_locks.default_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:24.131 03:58:17 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:07:24.131 ************************************ 00:07:24.131 END TEST default_locks 00:07:24.131 ************************************ 00:07:24.131 03:58:17 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:07:24.131 03:58:17 event.cpu_locks -- common/autotest_common.sh@1105 -- # 
'[' 2 -le 1 ']' 00:07:24.131 03:58:17 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:24.131 03:58:17 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:24.131 ************************************ 00:07:24.131 START TEST default_locks_via_rpc 00:07:24.131 ************************************ 00:07:24.131 03:58:17 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1129 -- # default_locks_via_rpc 00:07:24.131 03:58:17 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=59016 00:07:24.131 03:58:17 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 59016 00:07:24.131 03:58:17 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 59016 ']' 00:07:24.131 03:58:17 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:24.131 03:58:17 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:24.131 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:24.131 03:58:17 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:24.131 03:58:17 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:07:24.131 03:58:17 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:24.131 03:58:17 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:24.131 [2024-12-06 03:58:17.311514] Starting SPDK v25.01-pre git sha1 a4d2a837b / DPDK 24.03.0 initialization... 
00:07:24.131 [2024-12-06 03:58:17.311667] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59016 ] 00:07:24.390 [2024-12-06 03:58:17.485799] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:24.390 [2024-12-06 03:58:17.608230] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:25.327 03:58:18 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:25.327 03:58:18 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:07:25.327 03:58:18 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:07:25.327 03:58:18 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:25.327 03:58:18 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:25.327 03:58:18 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:25.327 03:58:18 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:07:25.327 03:58:18 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:07:25.327 03:58:18 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:07:25.327 03:58:18 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:07:25.327 03:58:18 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:07:25.327 03:58:18 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:25.327 03:58:18 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:25.327 03:58:18 
event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:25.327 03:58:18 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 59016 00:07:25.327 03:58:18 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 59016 00:07:25.327 03:58:18 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:07:25.585 03:58:18 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 59016 00:07:25.585 03:58:18 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # '[' -z 59016 ']' 00:07:25.585 03:58:18 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # kill -0 59016 00:07:25.585 03:58:18 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # uname 00:07:25.585 03:58:18 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:25.585 03:58:18 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59016 00:07:25.585 killing process with pid 59016 00:07:25.585 03:58:18 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:25.585 03:58:18 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:25.585 03:58:18 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59016' 00:07:25.585 03:58:18 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@973 -- # kill 59016 00:07:25.585 03:58:18 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@978 -- # wait 59016 00:07:28.186 ************************************ 00:07:28.186 END TEST default_locks_via_rpc 00:07:28.186 ************************************ 00:07:28.186 00:07:28.186 real 0m4.184s 00:07:28.186 user 0m4.137s 00:07:28.186 sys 0m0.625s 00:07:28.186 
03:58:21 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:28.186 03:58:21 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:28.186 03:58:21 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:07:28.186 03:58:21 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:28.186 03:58:21 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:28.186 03:58:21 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:28.186 ************************************ 00:07:28.186 START TEST non_locking_app_on_locked_coremask 00:07:28.186 ************************************ 00:07:28.186 03:58:21 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # non_locking_app_on_locked_coremask 00:07:28.186 03:58:21 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=59090 00:07:28.186 03:58:21 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 59090 /var/tmp/spdk.sock 00:07:28.186 03:58:21 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 59090 ']' 00:07:28.186 03:58:21 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:28.186 03:58:21 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:28.186 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:28.186 03:58:21 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:07:28.186 03:58:21 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:28.186 03:58:21 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:28.186 03:58:21 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:07:28.445 [2024-12-06 03:58:21.546647] Starting SPDK v25.01-pre git sha1 a4d2a837b / DPDK 24.03.0 initialization... 00:07:28.445 [2024-12-06 03:58:21.546796] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59090 ] 00:07:28.445 [2024-12-06 03:58:21.704473] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:28.703 [2024-12-06 03:58:21.821853] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:29.641 03:58:22 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:29.641 03:58:22 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:07:29.641 03:58:22 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=59112 00:07:29.641 03:58:22 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:07:29.641 03:58:22 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 59112 /var/tmp/spdk2.sock 00:07:29.641 03:58:22 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 59112 ']' 00:07:29.641 03:58:22 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # 
local rpc_addr=/var/tmp/spdk2.sock 00:07:29.641 03:58:22 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:29.641 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:29.641 03:58:22 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:29.641 03:58:22 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:29.641 03:58:22 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:29.641 [2024-12-06 03:58:22.747209] Starting SPDK v25.01-pre git sha1 a4d2a837b / DPDK 24.03.0 initialization... 00:07:29.641 [2024-12-06 03:58:22.747330] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59112 ] 00:07:29.641 [2024-12-06 03:58:22.913625] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:07:29.641 [2024-12-06 03:58:22.913683] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:29.900 [2024-12-06 03:58:23.134655] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:32.476 03:58:25 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:32.476 03:58:25 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:07:32.476 03:58:25 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 59090 00:07:32.476 03:58:25 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:07:32.476 03:58:25 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 59090 00:07:32.476 03:58:25 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 59090 00:07:32.476 03:58:25 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 59090 ']' 00:07:32.476 03:58:25 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 59090 00:07:32.476 03:58:25 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:07:32.476 03:58:25 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:32.477 03:58:25 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59090 00:07:32.477 03:58:25 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:32.477 03:58:25 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:32.477 killing process with pid 59090 00:07:32.477 03:58:25 event.cpu_locks.non_locking_app_on_locked_coremask -- 
common/autotest_common.sh@972 -- # echo 'killing process with pid 59090' 00:07:32.477 03:58:25 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 59090 00:07:32.477 03:58:25 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 59090 00:07:37.762 03:58:30 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 59112 00:07:37.762 03:58:30 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 59112 ']' 00:07:37.762 03:58:30 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 59112 00:07:37.762 03:58:30 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:07:37.762 03:58:30 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:37.762 03:58:30 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59112 00:07:37.762 03:58:30 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:37.762 03:58:30 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:37.762 killing process with pid 59112 00:07:37.762 03:58:30 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59112' 00:07:37.762 03:58:30 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 59112 00:07:37.762 03:58:30 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 59112 00:07:39.668 00:07:39.668 real 0m11.568s 00:07:39.668 user 0m11.822s 00:07:39.668 sys 0m1.160s 00:07:39.668 03:58:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # 
xtrace_disable 00:07:39.668 03:58:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:39.668 ************************************ 00:07:39.668 END TEST non_locking_app_on_locked_coremask 00:07:39.668 ************************************ 00:07:39.927 03:58:33 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:07:39.927 03:58:33 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:39.927 03:58:33 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:39.927 03:58:33 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:39.927 ************************************ 00:07:39.927 START TEST locking_app_on_unlocked_coremask 00:07:39.927 ************************************ 00:07:39.927 03:58:33 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_unlocked_coremask 00:07:39.927 03:58:33 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=59262 00:07:39.927 03:58:33 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:07:39.927 03:58:33 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 59262 /var/tmp/spdk.sock 00:07:39.927 03:58:33 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 59262 ']' 00:07:39.927 03:58:33 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:39.927 03:58:33 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:39.927 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:07:39.927 03:58:33 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:39.927 03:58:33 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:39.927 03:58:33 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:39.927 [2024-12-06 03:58:33.172905] Starting SPDK v25.01-pre git sha1 a4d2a837b / DPDK 24.03.0 initialization... 00:07:39.927 [2024-12-06 03:58:33.173037] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59262 ] 00:07:40.186 [2024-12-06 03:58:33.349411] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:07:40.186 [2024-12-06 03:58:33.349558] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:40.186 [2024-12-06 03:58:33.472649] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:41.123 03:58:34 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:41.123 03:58:34 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0 00:07:41.123 03:58:34 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=59279 00:07:41.123 03:58:34 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:07:41.123 03:58:34 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 59279 /var/tmp/spdk2.sock 00:07:41.123 03:58:34 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 59279 ']' 
00:07:41.123 03:58:34 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:41.123 03:58:34 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:41.123 03:58:34 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:41.123 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:41.123 03:58:34 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:41.123 03:58:34 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:41.123 [2024-12-06 03:58:34.467798] Starting SPDK v25.01-pre git sha1 a4d2a837b / DPDK 24.03.0 initialization... 00:07:41.123 [2024-12-06 03:58:34.468295] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59279 ] 00:07:41.383 [2024-12-06 03:58:34.636822] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:41.648 [2024-12-06 03:58:34.863503] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:44.192 03:58:37 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:44.192 03:58:37 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0 00:07:44.192 03:58:37 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 59279 00:07:44.192 03:58:37 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 59279 00:07:44.192 03:58:37 
event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:07:44.451 03:58:37 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 59262 00:07:44.451 03:58:37 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 59262 ']' 00:07:44.451 03:58:37 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 59262 00:07:44.451 03:58:37 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname 00:07:44.451 03:58:37 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:44.451 03:58:37 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59262 00:07:44.451 03:58:37 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:44.451 03:58:37 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:44.451 killing process with pid 59262 00:07:44.451 03:58:37 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59262' 00:07:44.451 03:58:37 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 59262 00:07:44.451 03:58:37 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 59262 00:07:49.722 03:58:42 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 59279 00:07:49.722 03:58:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 59279 ']' 00:07:49.722 03:58:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 59279 00:07:49.722 03:58:42 event.cpu_locks.locking_app_on_unlocked_coremask -- 
common/autotest_common.sh@959 -- # uname 00:07:49.722 03:58:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:49.722 03:58:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59279 00:07:49.722 03:58:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:49.722 03:58:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:49.722 killing process with pid 59279 00:07:49.722 03:58:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59279' 00:07:49.722 03:58:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 59279 00:07:49.722 03:58:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 59279 00:07:51.628 00:07:51.628 real 0m11.829s 00:07:51.628 user 0m12.151s 00:07:51.628 sys 0m1.284s 00:07:51.628 03:58:44 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:51.628 03:58:44 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:51.628 ************************************ 00:07:51.628 END TEST locking_app_on_unlocked_coremask 00:07:51.628 ************************************ 00:07:51.628 03:58:44 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:07:51.628 03:58:44 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:51.628 03:58:44 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:51.628 03:58:44 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:51.628 ************************************ 00:07:51.628 START TEST 
locking_app_on_locked_coremask 00:07:51.628 ************************************ 00:07:51.628 03:58:44 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_locked_coremask 00:07:51.628 03:58:44 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=59427 00:07:51.628 03:58:44 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:07:51.628 03:58:44 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 59427 /var/tmp/spdk.sock 00:07:51.628 03:58:44 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 59427 ']' 00:07:51.628 03:58:44 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:51.629 03:58:44 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:51.629 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:51.629 03:58:44 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:51.629 03:58:44 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:51.629 03:58:44 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:51.888 [2024-12-06 03:58:45.064943] Starting SPDK v25.01-pre git sha1 a4d2a837b / DPDK 24.03.0 initialization... 
00:07:51.888 [2024-12-06 03:58:45.065097] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59427 ] 00:07:51.888 [2024-12-06 03:58:45.229665] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:52.148 [2024-12-06 03:58:45.348119] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:53.165 03:58:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:53.165 03:58:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:07:53.165 03:58:46 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=59443 00:07:53.165 03:58:46 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 59443 /var/tmp/spdk2.sock 00:07:53.165 03:58:46 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:07:53.165 03:58:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # local es=0 00:07:53.165 03:58:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 59443 /var/tmp/spdk2.sock 00:07:53.165 03:58:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:07:53.165 03:58:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:53.165 03:58:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:07:53.165 03:58:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t 
"$arg")" in 00:07:53.165 03:58:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # waitforlisten 59443 /var/tmp/spdk2.sock 00:07:53.165 03:58:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 59443 ']' 00:07:53.165 03:58:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:53.165 03:58:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:53.165 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:53.165 03:58:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:53.165 03:58:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:53.165 03:58:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:53.165 [2024-12-06 03:58:46.348199] Starting SPDK v25.01-pre git sha1 a4d2a837b / DPDK 24.03.0 initialization... 00:07:53.166 [2024-12-06 03:58:46.348644] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59443 ] 00:07:53.424 [2024-12-06 03:58:46.516551] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 59427 has claimed it. 00:07:53.424 [2024-12-06 03:58:46.516618] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 
00:07:53.684 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 850: kill: (59443) - No such process 00:07:53.684 ERROR: process (pid: 59443) is no longer running 00:07:53.684 03:58:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:53.684 03:58:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 1 00:07:53.684 03:58:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # es=1 00:07:53.684 03:58:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:53.684 03:58:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:07:53.684 03:58:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:53.684 03:58:46 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 59427 00:07:53.684 03:58:46 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 59427 00:07:53.684 03:58:46 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:07:53.943 03:58:47 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 59427 00:07:53.943 03:58:47 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 59427 ']' 00:07:53.943 03:58:47 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 59427 00:07:53.943 03:58:47 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:07:53.943 03:58:47 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:53.943 03:58:47 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59427 00:07:53.943 
03:58:47 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:53.943 03:58:47 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:53.943 killing process with pid 59427 00:07:53.943 03:58:47 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59427' 00:07:53.943 03:58:47 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 59427 00:07:53.943 03:58:47 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 59427 00:07:56.473 00:07:56.473 real 0m4.660s 00:07:56.473 user 0m4.857s 00:07:56.473 sys 0m0.669s 00:07:56.473 ************************************ 00:07:56.473 END TEST locking_app_on_locked_coremask 00:07:56.473 ************************************ 00:07:56.473 03:58:49 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:56.473 03:58:49 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:56.473 03:58:49 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:07:56.473 03:58:49 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:56.473 03:58:49 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:56.473 03:58:49 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:56.473 ************************************ 00:07:56.473 START TEST locking_overlapped_coremask 00:07:56.473 ************************************ 00:07:56.473 03:58:49 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask 00:07:56.473 03:58:49 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=59513 00:07:56.473 03:58:49 
event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:07:56.473 03:58:49 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 59513 /var/tmp/spdk.sock 00:07:56.473 03:58:49 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 59513 ']' 00:07:56.473 03:58:49 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:56.473 03:58:49 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:56.473 03:58:49 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:56.473 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:56.473 03:58:49 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:56.473 03:58:49 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:56.473 [2024-12-06 03:58:49.790501] Starting SPDK v25.01-pre git sha1 a4d2a837b / DPDK 24.03.0 initialization... 
00:07:56.473 [2024-12-06 03:58:49.790693] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59513 ] 00:07:56.731 [2024-12-06 03:58:49.955603] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:56.731 [2024-12-06 03:58:50.066380] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:56.731 [2024-12-06 03:58:50.066392] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:07:56.731 [2024-12-06 03:58:50.066381] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:57.687 03:58:50 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:57.687 03:58:50 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 0 00:07:57.687 03:58:50 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=59534 00:07:57.687 03:58:50 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:07:57.687 03:58:50 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 59534 /var/tmp/spdk2.sock 00:07:57.687 03:58:50 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # local es=0 00:07:57.687 03:58:50 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 59534 /var/tmp/spdk2.sock 00:07:57.687 03:58:50 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:07:57.687 03:58:50 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:57.687 03:58:50 event.cpu_locks.locking_overlapped_coremask 
-- common/autotest_common.sh@644 -- # type -t waitforlisten 00:07:57.687 03:58:50 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:57.687 03:58:50 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # waitforlisten 59534 /var/tmp/spdk2.sock 00:07:57.687 03:58:50 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 59534 ']' 00:07:57.687 03:58:50 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:57.687 03:58:50 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:57.687 03:58:50 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:57.687 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:57.687 03:58:50 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:57.688 03:58:50 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:57.688 [2024-12-06 03:58:50.997817] Starting SPDK v25.01-pre git sha1 a4d2a837b / DPDK 24.03.0 initialization... 00:07:57.688 [2024-12-06 03:58:50.997988] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59534 ] 00:07:57.946 [2024-12-06 03:58:51.175556] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 59513 has claimed it. 00:07:57.946 [2024-12-06 03:58:51.175635] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 
00:07:58.520 ERROR: process (pid: 59534) is no longer running 00:07:58.520 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 850: kill: (59534) - No such process 00:07:58.520 03:58:51 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:58.520 03:58:51 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 1 00:07:58.520 03:58:51 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # es=1 00:07:58.520 03:58:51 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:58.520 03:58:51 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:07:58.520 03:58:51 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:58.520 03:58:51 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:07:58.520 03:58:51 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:07:58.520 03:58:51 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:07:58.520 03:58:51 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:07:58.520 03:58:51 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 59513 00:07:58.520 03:58:51 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # '[' -z 59513 ']' 00:07:58.520 03:58:51 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # kill -0 59513 00:07:58.520 03:58:51 
event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # uname 00:07:58.520 03:58:51 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:58.520 03:58:51 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59513 00:07:58.520 03:58:51 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:58.520 03:58:51 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:58.520 03:58:51 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59513' 00:07:58.520 killing process with pid 59513 00:07:58.520 03:58:51 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@973 -- # kill 59513 00:07:58.520 03:58:51 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@978 -- # wait 59513 00:08:01.058 00:08:01.058 real 0m4.589s 00:08:01.058 user 0m12.540s 00:08:01.058 sys 0m0.591s 00:08:01.058 03:58:54 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:01.058 03:58:54 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:08:01.058 ************************************ 00:08:01.058 END TEST locking_overlapped_coremask 00:08:01.058 ************************************ 00:08:01.059 03:58:54 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:08:01.059 03:58:54 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:01.059 03:58:54 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:01.059 03:58:54 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:08:01.059 ************************************ 00:08:01.059 START TEST 
locking_overlapped_coremask_via_rpc 00:08:01.059 ************************************ 00:08:01.059 03:58:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask_via_rpc 00:08:01.059 03:58:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=59603 00:08:01.059 03:58:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:08:01.059 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:01.059 03:58:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 59603 /var/tmp/spdk.sock 00:08:01.059 03:58:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 59603 ']' 00:08:01.059 03:58:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:01.059 03:58:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:01.059 03:58:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:01.059 03:58:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:01.059 03:58:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:01.318 [2024-12-06 03:58:54.451129] Starting SPDK v25.01-pre git sha1 a4d2a837b / DPDK 24.03.0 initialization... 
00:08:01.318 [2024-12-06 03:58:54.451244] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59603 ] 00:08:01.318 [2024-12-06 03:58:54.608019] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:08:01.318 [2024-12-06 03:58:54.608088] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:08:01.577 [2024-12-06 03:58:54.726882] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:01.577 [2024-12-06 03:58:54.727040] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:01.577 [2024-12-06 03:58:54.727120] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:08:02.529 03:58:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:02.529 03:58:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:08:02.529 03:58:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=59627 00:08:02.529 03:58:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:08:02.529 03:58:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 59627 /var/tmp/spdk2.sock 00:08:02.529 03:58:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 59627 ']' 00:08:02.529 03:58:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:08:02.529 03:58:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:02.529 03:58:55 
event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:08:02.529 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:08:02.529 03:58:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:02.529 03:58:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:02.529 [2024-12-06 03:58:55.675985] Starting SPDK v25.01-pre git sha1 a4d2a837b / DPDK 24.03.0 initialization... 00:08:02.529 [2024-12-06 03:58:55.676267] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59627 ] 00:08:02.529 [2024-12-06 03:58:55.856958] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:08:02.529 [2024-12-06 03:58:55.857023] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:08:02.790 [2024-12-06 03:58:56.110619] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:08:02.790 [2024-12-06 03:58:56.110750] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:08:02.790 [2024-12-06 03:58:56.110786] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:08:05.344 03:58:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:05.344 03:58:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:08:05.344 03:58:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:08:05.344 03:58:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:05.344 03:58:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:05.344 03:58:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:05.344 03:58:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:08:05.344 03:58:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # local es=0 00:08:05.344 03:58:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:08:05.344 03:58:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:08:05.344 03:58:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:05.344 03:58:58 
event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:08:05.344 03:58:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:05.344 03:58:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:08:05.344 03:58:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:05.344 03:58:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:05.344 [2024-12-06 03:58:58.293231] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 59603 has claimed it. 00:08:05.344 request: 00:08:05.344 { 00:08:05.344 "method": "framework_enable_cpumask_locks", 00:08:05.344 "req_id": 1 00:08:05.344 } 00:08:05.344 Got JSON-RPC error response 00:08:05.344 response: 00:08:05.344 { 00:08:05.344 "code": -32603, 00:08:05.344 "message": "Failed to claim CPU core: 2" 00:08:05.344 } 00:08:05.344 03:58:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:08:05.344 03:58:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # es=1 00:08:05.344 03:58:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:08:05.344 03:58:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:08:05.344 03:58:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:08:05.344 03:58:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 59603 /var/tmp/spdk.sock 00:08:05.344 03:58:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # 
'[' -z 59603 ']' 00:08:05.344 03:58:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:05.344 03:58:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:05.344 03:58:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:05.344 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:05.344 03:58:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:05.344 03:58:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:05.344 03:58:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:05.344 03:58:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:08:05.344 03:58:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 59627 /var/tmp/spdk2.sock 00:08:05.344 03:58:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 59627 ']' 00:08:05.344 03:58:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:08:05.344 03:58:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:05.344 03:58:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:08:05.344 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
00:08:05.344 03:58:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:05.344 03:58:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:05.605 03:58:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:05.605 03:58:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:08:05.605 03:58:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:08:05.605 03:58:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:08:05.605 03:58:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:08:05.605 03:58:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:08:05.605 00:08:05.605 real 0m4.448s 00:08:05.605 user 0m1.420s 00:08:05.605 sys 0m0.195s 00:08:05.605 03:58:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:05.605 03:58:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:05.605 ************************************ 00:08:05.605 END TEST locking_overlapped_coremask_via_rpc 00:08:05.605 ************************************ 00:08:05.605 03:58:58 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:08:05.605 03:58:58 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 59603 ]] 00:08:05.605 03:58:58 event.cpu_locks -- event/cpu_locks.sh@15 -- # 
killprocess 59603 00:08:05.605 03:58:58 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 59603 ']' 00:08:05.605 03:58:58 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 59603 00:08:05.605 03:58:58 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:08:05.605 03:58:58 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:05.605 03:58:58 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59603 00:08:05.605 killing process with pid 59603 00:08:05.605 03:58:58 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:05.605 03:58:58 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:05.605 03:58:58 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59603' 00:08:05.605 03:58:58 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 59603 00:08:05.605 03:58:58 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 59603 00:08:08.898 03:59:01 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 59627 ]] 00:08:08.898 03:59:01 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 59627 00:08:08.898 03:59:01 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 59627 ']' 00:08:08.898 03:59:01 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 59627 00:08:08.898 03:59:01 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:08:08.898 03:59:01 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:08.898 03:59:01 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59627 00:08:08.898 03:59:01 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:08:08.898 03:59:01 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:08:08.898 killing process with pid 59627 00:08:08.898 03:59:01 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing 
process with pid 59627' 00:08:08.898 03:59:01 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 59627 00:08:08.898 03:59:01 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 59627 00:08:10.807 03:59:04 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:08:10.807 03:59:04 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:08:10.807 Process with pid 59603 is not found 00:08:10.807 Process with pid 59627 is not found 00:08:10.807 03:59:04 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 59603 ]] 00:08:10.807 03:59:04 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 59603 00:08:10.807 03:59:04 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 59603 ']' 00:08:10.807 03:59:04 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 59603 00:08:10.807 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (59603) - No such process 00:08:10.807 03:59:04 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 59603 is not found' 00:08:10.807 03:59:04 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 59627 ]] 00:08:10.807 03:59:04 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 59627 00:08:10.807 03:59:04 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 59627 ']' 00:08:10.807 03:59:04 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 59627 00:08:10.807 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (59627) - No such process 00:08:10.807 03:59:04 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 59627 is not found' 00:08:10.807 03:59:04 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:08:10.807 00:08:10.807 real 0m51.204s 00:08:10.807 user 1m29.082s 00:08:10.807 sys 0m6.339s 00:08:10.807 03:59:04 event.cpu_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:10.807 03:59:04 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:08:10.807 
************************************ 00:08:10.807 END TEST cpu_locks 00:08:10.807 ************************************ 00:08:11.068 00:08:11.068 real 1m23.139s 00:08:11.068 user 2m33.694s 00:08:11.068 sys 0m10.262s 00:08:11.068 03:59:04 event -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:11.068 03:59:04 event -- common/autotest_common.sh@10 -- # set +x 00:08:11.068 ************************************ 00:08:11.068 END TEST event 00:08:11.068 ************************************ 00:08:11.068 03:59:04 -- spdk/autotest.sh@169 -- # run_test thread /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:08:11.068 03:59:04 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:11.068 03:59:04 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:11.068 03:59:04 -- common/autotest_common.sh@10 -- # set +x 00:08:11.068 ************************************ 00:08:11.068 START TEST thread 00:08:11.068 ************************************ 00:08:11.068 03:59:04 thread -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:08:11.068 * Looking for test storage... 
00:08:11.068 * Found test storage at /home/vagrant/spdk_repo/spdk/test/thread 00:08:11.068 03:59:04 thread -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:08:11.068 03:59:04 thread -- common/autotest_common.sh@1711 -- # lcov --version 00:08:11.068 03:59:04 thread -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:08:11.329 03:59:04 thread -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:08:11.329 03:59:04 thread -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:11.329 03:59:04 thread -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:11.329 03:59:04 thread -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:11.329 03:59:04 thread -- scripts/common.sh@336 -- # IFS=.-: 00:08:11.329 03:59:04 thread -- scripts/common.sh@336 -- # read -ra ver1 00:08:11.329 03:59:04 thread -- scripts/common.sh@337 -- # IFS=.-: 00:08:11.329 03:59:04 thread -- scripts/common.sh@337 -- # read -ra ver2 00:08:11.329 03:59:04 thread -- scripts/common.sh@338 -- # local 'op=<' 00:08:11.329 03:59:04 thread -- scripts/common.sh@340 -- # ver1_l=2 00:08:11.329 03:59:04 thread -- scripts/common.sh@341 -- # ver2_l=1 00:08:11.329 03:59:04 thread -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:11.329 03:59:04 thread -- scripts/common.sh@344 -- # case "$op" in 00:08:11.329 03:59:04 thread -- scripts/common.sh@345 -- # : 1 00:08:11.329 03:59:04 thread -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:11.329 03:59:04 thread -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:11.329 03:59:04 thread -- scripts/common.sh@365 -- # decimal 1 00:08:11.329 03:59:04 thread -- scripts/common.sh@353 -- # local d=1 00:08:11.329 03:59:04 thread -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:11.329 03:59:04 thread -- scripts/common.sh@355 -- # echo 1 00:08:11.329 03:59:04 thread -- scripts/common.sh@365 -- # ver1[v]=1 00:08:11.329 03:59:04 thread -- scripts/common.sh@366 -- # decimal 2 00:08:11.329 03:59:04 thread -- scripts/common.sh@353 -- # local d=2 00:08:11.329 03:59:04 thread -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:11.329 03:59:04 thread -- scripts/common.sh@355 -- # echo 2 00:08:11.329 03:59:04 thread -- scripts/common.sh@366 -- # ver2[v]=2 00:08:11.329 03:59:04 thread -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:11.329 03:59:04 thread -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:11.329 03:59:04 thread -- scripts/common.sh@368 -- # return 0 00:08:11.329 03:59:04 thread -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:11.329 03:59:04 thread -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:08:11.329 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:11.329 --rc genhtml_branch_coverage=1 00:08:11.329 --rc genhtml_function_coverage=1 00:08:11.329 --rc genhtml_legend=1 00:08:11.329 --rc geninfo_all_blocks=1 00:08:11.329 --rc geninfo_unexecuted_blocks=1 00:08:11.329 00:08:11.329 ' 00:08:11.329 03:59:04 thread -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:08:11.329 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:11.329 --rc genhtml_branch_coverage=1 00:08:11.329 --rc genhtml_function_coverage=1 00:08:11.329 --rc genhtml_legend=1 00:08:11.329 --rc geninfo_all_blocks=1 00:08:11.329 --rc geninfo_unexecuted_blocks=1 00:08:11.329 00:08:11.329 ' 00:08:11.329 03:59:04 thread -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:08:11.329 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:11.329 --rc genhtml_branch_coverage=1 00:08:11.329 --rc genhtml_function_coverage=1 00:08:11.329 --rc genhtml_legend=1 00:08:11.329 --rc geninfo_all_blocks=1 00:08:11.329 --rc geninfo_unexecuted_blocks=1 00:08:11.329 00:08:11.329 ' 00:08:11.329 03:59:04 thread -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:08:11.329 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:11.329 --rc genhtml_branch_coverage=1 00:08:11.329 --rc genhtml_function_coverage=1 00:08:11.329 --rc genhtml_legend=1 00:08:11.329 --rc geninfo_all_blocks=1 00:08:11.329 --rc geninfo_unexecuted_blocks=1 00:08:11.329 00:08:11.329 ' 00:08:11.329 03:59:04 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:08:11.329 03:59:04 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:08:11.329 03:59:04 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:11.329 03:59:04 thread -- common/autotest_common.sh@10 -- # set +x 00:08:11.329 ************************************ 00:08:11.329 START TEST thread_poller_perf 00:08:11.329 ************************************ 00:08:11.329 03:59:04 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:08:11.329 [2024-12-06 03:59:04.537798] Starting SPDK v25.01-pre git sha1 a4d2a837b / DPDK 24.03.0 initialization... 
00:08:11.329 [2024-12-06 03:59:04.537960] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59822 ] 00:08:11.590 [2024-12-06 03:59:04.711669] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:11.590 [2024-12-06 03:59:04.830027] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:11.590 Running 1000 pollers for 1 seconds with 1 microseconds period. 00:08:12.970 [2024-12-06T03:59:06.324Z] ====================================== 00:08:12.970 [2024-12-06T03:59:06.324Z] busy:2297707740 (cyc) 00:08:12.970 [2024-12-06T03:59:06.324Z] total_run_count: 397000 00:08:12.970 [2024-12-06T03:59:06.324Z] tsc_hz: 2290000000 (cyc) 00:08:12.970 [2024-12-06T03:59:06.324Z] ====================================== 00:08:12.970 [2024-12-06T03:59:06.324Z] poller_cost: 5787 (cyc), 2527 (nsec) 00:08:12.970 00:08:12.970 ************************************ 00:08:12.970 END TEST thread_poller_perf 00:08:12.970 ************************************ 00:08:12.970 real 0m1.573s 00:08:12.970 user 0m1.364s 00:08:12.970 sys 0m0.102s 00:08:12.970 03:59:06 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:12.971 03:59:06 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:08:12.971 03:59:06 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:08:12.971 03:59:06 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:08:12.971 03:59:06 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:12.971 03:59:06 thread -- common/autotest_common.sh@10 -- # set +x 00:08:12.971 ************************************ 00:08:12.971 START TEST thread_poller_perf 00:08:12.971 
************************************ 00:08:12.971 03:59:06 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:08:12.971 [2024-12-06 03:59:06.174681] Starting SPDK v25.01-pre git sha1 a4d2a837b / DPDK 24.03.0 initialization... 00:08:12.971 [2024-12-06 03:59:06.174790] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59864 ] 00:08:13.230 [2024-12-06 03:59:06.346299] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:13.230 [2024-12-06 03:59:06.456011] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:13.230 Running 1000 pollers for 1 seconds with 0 microseconds period. 00:08:14.609 [2024-12-06T03:59:07.963Z] ====================================== 00:08:14.609 [2024-12-06T03:59:07.963Z] busy:2293609732 (cyc) 00:08:14.609 [2024-12-06T03:59:07.963Z] total_run_count: 4822000 00:08:14.609 [2024-12-06T03:59:07.963Z] tsc_hz: 2290000000 (cyc) 00:08:14.609 [2024-12-06T03:59:07.963Z] ====================================== 00:08:14.609 [2024-12-06T03:59:07.963Z] poller_cost: 475 (cyc), 207 (nsec) 00:08:14.609 ************************************ 00:08:14.609 END TEST thread_poller_perf 00:08:14.609 ************************************ 00:08:14.609 00:08:14.609 real 0m1.553s 00:08:14.609 user 0m1.358s 00:08:14.609 sys 0m0.089s 00:08:14.610 03:59:07 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:14.610 03:59:07 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:08:14.610 03:59:07 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:08:14.610 00:08:14.610 real 0m3.477s 00:08:14.610 user 0m2.886s 00:08:14.610 sys 0m0.394s 00:08:14.610 ************************************ 
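The two poller_perf summaries above can be reproduced by hand: the reported poller_cost in cycles is busy cycles divided by total_run_count, and the nanosecond figure converts those cycles through tsc_hz. A minimal sketch of that arithmetic (variable names are ours, not SPDK's; values are copied from the two result tables above):

```shell
# Recompute poller_cost for both runs logged above (1 us period, then 0 us).
# poller_cost_cyc = busy_cyc / total_run_count
# poller_cost_nsec = poller_cost_cyc * 10^9 / tsc_hz
tsc_hz=2290000000

# Run 1: busy:2297707740 (cyc), total_run_count: 397000
echo $(( 2297707740 / 397000 ))                             # 5787 (cyc)
echo $(( 2297707740 / 397000 * 1000000000 / tsc_hz ))       # 2527 (nsec)

# Run 2: busy:2293609732 (cyc), total_run_count: 4822000
echo $(( 2293609732 / 4822000 ))                            # 475 (cyc)
echo $(( 2293609732 / 4822000 * 1000000000 / tsc_hz ))      # 207 (nsec)
```

Both results match the logged poller_cost lines; the 0-microsecond-period run is roughly 12x cheaper per poll because each poller fires without waiting out a timer period.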
00:08:14.610 END TEST thread 00:08:14.610 ************************************ 00:08:14.610 03:59:07 thread -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:14.610 03:59:07 thread -- common/autotest_common.sh@10 -- # set +x 00:08:14.610 03:59:07 -- spdk/autotest.sh@171 -- # [[ 0 -eq 1 ]] 00:08:14.610 03:59:07 -- spdk/autotest.sh@176 -- # run_test app_cmdline /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:08:14.610 03:59:07 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:14.610 03:59:07 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:14.610 03:59:07 -- common/autotest_common.sh@10 -- # set +x 00:08:14.610 ************************************ 00:08:14.610 START TEST app_cmdline 00:08:14.610 ************************************ 00:08:14.610 03:59:07 app_cmdline -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:08:14.610 * Looking for test storage... 00:08:14.610 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:08:14.610 03:59:07 app_cmdline -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:08:14.610 03:59:07 app_cmdline -- common/autotest_common.sh@1711 -- # lcov --version 00:08:14.610 03:59:07 app_cmdline -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:08:14.869 03:59:07 app_cmdline -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:08:14.869 03:59:07 app_cmdline -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:14.869 03:59:07 app_cmdline -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:14.869 03:59:07 app_cmdline -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:14.869 03:59:07 app_cmdline -- scripts/common.sh@336 -- # IFS=.-: 00:08:14.869 03:59:07 app_cmdline -- scripts/common.sh@336 -- # read -ra ver1 00:08:14.869 03:59:07 app_cmdline -- scripts/common.sh@337 -- # IFS=.-: 00:08:14.869 03:59:07 app_cmdline -- scripts/common.sh@337 -- # read -ra ver2 00:08:14.869 03:59:07 app_cmdline -- scripts/common.sh@338 -- # local 'op=<' 
00:08:14.869 03:59:07 app_cmdline -- scripts/common.sh@340 -- # ver1_l=2 00:08:14.869 03:59:07 app_cmdline -- scripts/common.sh@341 -- # ver2_l=1 00:08:14.869 03:59:07 app_cmdline -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:14.869 03:59:07 app_cmdline -- scripts/common.sh@344 -- # case "$op" in 00:08:14.869 03:59:07 app_cmdline -- scripts/common.sh@345 -- # : 1 00:08:14.869 03:59:07 app_cmdline -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:14.870 03:59:07 app_cmdline -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:08:14.870 03:59:07 app_cmdline -- scripts/common.sh@365 -- # decimal 1 00:08:14.870 03:59:07 app_cmdline -- scripts/common.sh@353 -- # local d=1 00:08:14.870 03:59:07 app_cmdline -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:14.870 03:59:07 app_cmdline -- scripts/common.sh@355 -- # echo 1 00:08:14.870 03:59:07 app_cmdline -- scripts/common.sh@365 -- # ver1[v]=1 00:08:14.870 03:59:07 app_cmdline -- scripts/common.sh@366 -- # decimal 2 00:08:14.870 03:59:07 app_cmdline -- scripts/common.sh@353 -- # local d=2 00:08:14.870 03:59:07 app_cmdline -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:14.870 03:59:07 app_cmdline -- scripts/common.sh@355 -- # echo 2 00:08:14.870 03:59:07 app_cmdline -- scripts/common.sh@366 -- # ver2[v]=2 00:08:14.870 03:59:07 app_cmdline -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:14.870 03:59:07 app_cmdline -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:14.870 03:59:07 app_cmdline -- scripts/common.sh@368 -- # return 0 00:08:14.870 03:59:07 app_cmdline -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:14.870 03:59:08 app_cmdline -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:08:14.870 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:14.870 --rc genhtml_branch_coverage=1 00:08:14.870 --rc genhtml_function_coverage=1 00:08:14.870 --rc 
genhtml_legend=1 00:08:14.870 --rc geninfo_all_blocks=1 00:08:14.870 --rc geninfo_unexecuted_blocks=1 00:08:14.870 00:08:14.870 ' 00:08:14.870 03:59:08 app_cmdline -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:08:14.870 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:14.870 --rc genhtml_branch_coverage=1 00:08:14.870 --rc genhtml_function_coverage=1 00:08:14.870 --rc genhtml_legend=1 00:08:14.870 --rc geninfo_all_blocks=1 00:08:14.870 --rc geninfo_unexecuted_blocks=1 00:08:14.870 00:08:14.870 ' 00:08:14.870 03:59:08 app_cmdline -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:08:14.870 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:14.870 --rc genhtml_branch_coverage=1 00:08:14.870 --rc genhtml_function_coverage=1 00:08:14.870 --rc genhtml_legend=1 00:08:14.870 --rc geninfo_all_blocks=1 00:08:14.870 --rc geninfo_unexecuted_blocks=1 00:08:14.870 00:08:14.870 ' 00:08:14.870 03:59:08 app_cmdline -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:08:14.870 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:14.870 --rc genhtml_branch_coverage=1 00:08:14.870 --rc genhtml_function_coverage=1 00:08:14.870 --rc genhtml_legend=1 00:08:14.870 --rc geninfo_all_blocks=1 00:08:14.870 --rc geninfo_unexecuted_blocks=1 00:08:14.870 00:08:14.870 ' 00:08:14.870 03:59:08 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:08:14.870 03:59:08 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=59953 00:08:14.870 03:59:08 app_cmdline -- app/cmdline.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:08:14.870 03:59:08 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 59953 00:08:14.870 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
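The scripts/common.sh xtrace above (decimal 1, decimal 2, the ver1[v]/ver2[v] comparisons) is the version check `lt 1.15 2`, used to pick lcov options for the installed lcov release: each version string is split into numeric components and compared component by component. A simplified two-component reimplementation of that logic, written by us for illustration (not the SPDK script itself):

```shell
# version_lt A B: succeed if version A sorts before version B.
# Split each version on '.' and compare the components numerically,
# treating a missing component as 0 (so "2" compares as "2.0").
version_lt() {
    va=$1 vb=$2
    old_ifs=$IFS; IFS=.
    set -- $va; a1=${1:-0} a2=${2:-0}
    set -- $vb; b1=${1:-0} b2=${2:-0}
    IFS=$old_ifs
    [ "$a1" -lt "$b1" ] && return 0
    [ "$a1" -gt "$b1" ] && return 1
    [ "$a2" -lt "$b2" ]        # equal versions fall through to failure
}

version_lt 1.15 2 && echo "lcov 1.15 sorts before 2"
```

This mirrors the trace above: the first components 1 and 2 already decide the comparison, so `lt 1.15 2` returns 0.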
00:08:14.870 03:59:08 app_cmdline -- common/autotest_common.sh@835 -- # '[' -z 59953 ']' 00:08:14.870 03:59:08 app_cmdline -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:14.870 03:59:08 app_cmdline -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:14.870 03:59:08 app_cmdline -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:14.870 03:59:08 app_cmdline -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:14.870 03:59:08 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:08:14.870 [2024-12-06 03:59:08.101151] Starting SPDK v25.01-pre git sha1 a4d2a837b / DPDK 24.03.0 initialization... 00:08:14.870 [2024-12-06 03:59:08.101365] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59953 ] 00:08:15.129 [2024-12-06 03:59:08.275108] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:15.129 [2024-12-06 03:59:08.385468] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:16.069 03:59:09 app_cmdline -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:16.069 03:59:09 app_cmdline -- common/autotest_common.sh@868 -- # return 0 00:08:16.069 03:59:09 app_cmdline -- app/cmdline.sh@20 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version 00:08:16.328 { 00:08:16.328 "version": "SPDK v25.01-pre git sha1 a4d2a837b", 00:08:16.328 "fields": { 00:08:16.328 "major": 25, 00:08:16.328 "minor": 1, 00:08:16.328 "patch": 0, 00:08:16.328 "suffix": "-pre", 00:08:16.328 "commit": "a4d2a837b" 00:08:16.328 } 00:08:16.328 } 00:08:16.328 03:59:09 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:08:16.328 03:59:09 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 
00:08:16.328 03:59:09 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:08:16.328 03:59:09 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:08:16.328 03:59:09 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:08:16.328 03:59:09 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:08:16.328 03:59:09 app_cmdline -- app/cmdline.sh@26 -- # sort 00:08:16.328 03:59:09 app_cmdline -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:16.328 03:59:09 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:08:16.328 03:59:09 app_cmdline -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:16.328 03:59:09 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:08:16.328 03:59:09 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:08:16.328 03:59:09 app_cmdline -- app/cmdline.sh@30 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:08:16.328 03:59:09 app_cmdline -- common/autotest_common.sh@652 -- # local es=0 00:08:16.328 03:59:09 app_cmdline -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:08:16.328 03:59:09 app_cmdline -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:16.328 03:59:09 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:16.328 03:59:09 app_cmdline -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:16.328 03:59:09 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:16.328 03:59:09 app_cmdline -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:16.328 03:59:09 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:16.328 03:59:09 
app_cmdline -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:16.328 03:59:09 app_cmdline -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:08:16.328 03:59:09 app_cmdline -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:08:16.587 request: 00:08:16.587 { 00:08:16.587 "method": "env_dpdk_get_mem_stats", 00:08:16.587 "req_id": 1 00:08:16.587 } 00:08:16.587 Got JSON-RPC error response 00:08:16.587 response: 00:08:16.587 { 00:08:16.587 "code": -32601, 00:08:16.587 "message": "Method not found" 00:08:16.587 } 00:08:16.587 03:59:09 app_cmdline -- common/autotest_common.sh@655 -- # es=1 00:08:16.587 03:59:09 app_cmdline -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:08:16.587 03:59:09 app_cmdline -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:08:16.587 03:59:09 app_cmdline -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:08:16.587 03:59:09 app_cmdline -- app/cmdline.sh@1 -- # killprocess 59953 00:08:16.587 03:59:09 app_cmdline -- common/autotest_common.sh@954 -- # '[' -z 59953 ']' 00:08:16.587 03:59:09 app_cmdline -- common/autotest_common.sh@958 -- # kill -0 59953 00:08:16.587 03:59:09 app_cmdline -- common/autotest_common.sh@959 -- # uname 00:08:16.587 03:59:09 app_cmdline -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:16.587 03:59:09 app_cmdline -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59953 00:08:16.587 03:59:09 app_cmdline -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:16.587 03:59:09 app_cmdline -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:16.587 03:59:09 app_cmdline -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59953' 00:08:16.587 killing process with pid 59953 00:08:16.587 03:59:09 app_cmdline -- common/autotest_common.sh@973 -- # kill 59953 00:08:16.587 03:59:09 app_cmdline -- 
common/autotest_common.sh@978 -- # wait 59953 00:08:19.132 00:08:19.133 real 0m4.452s 00:08:19.133 user 0m4.659s 00:08:19.133 sys 0m0.619s 00:08:19.133 ************************************ 00:08:19.133 END TEST app_cmdline 00:08:19.133 ************************************ 00:08:19.133 03:59:12 app_cmdline -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:19.133 03:59:12 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:08:19.133 03:59:12 -- spdk/autotest.sh@177 -- # run_test version /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:08:19.133 03:59:12 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:19.133 03:59:12 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:19.133 03:59:12 -- common/autotest_common.sh@10 -- # set +x 00:08:19.133 ************************************ 00:08:19.133 START TEST version 00:08:19.133 ************************************ 00:08:19.133 03:59:12 version -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:08:19.133 * Looking for test storage... 
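The env_dpdk_get_mem_stats failure recorded in the app_cmdline run above is the expected negative-test outcome: spdk_tgt was started with `--rpcs-allowed spdk_get_version,rpc_get_methods`, so any method outside that allowlist is answered with the standard JSON-RPC "Method not found" error, code -32601. A toy sketch of such an allowlist check, written by us to illustrate the behavior (not SPDK's actual dispatcher):

```shell
# Reject any RPC method not on the allowlist with a JSON-RPC -32601 error,
# mirroring the response body captured in the log above.
allowed="spdk_get_version rpc_get_methods"

dispatch() {
    method=$1
    for m in $allowed; do
        # Allowed method: pretend to handle it.
        [ "$m" = "$method" ] && { echo '{"result": "ok"}'; return 0; }
    done
    # Not allowed: same error shape as the logged response.
    echo '{"code": -32601, "message": "Method not found"}'
    return 1
}

dispatch env_dpdk_get_mem_stats    # prints the -32601 error, as in the log
```

The test then asserts `es=1` on the rpc.py exit status, which is exactly what the `NOT ... env_dpdk_get_mem_stats` wrapper above verifies.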
00:08:19.133 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:08:19.133 03:59:12 version -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:08:19.133 03:59:12 version -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:08:19.133 03:59:12 version -- common/autotest_common.sh@1711 -- # lcov --version 00:08:19.391 03:59:12 version -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:08:19.391 03:59:12 version -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:19.391 03:59:12 version -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:19.391 03:59:12 version -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:19.391 03:59:12 version -- scripts/common.sh@336 -- # IFS=.-: 00:08:19.391 03:59:12 version -- scripts/common.sh@336 -- # read -ra ver1 00:08:19.391 03:59:12 version -- scripts/common.sh@337 -- # IFS=.-: 00:08:19.391 03:59:12 version -- scripts/common.sh@337 -- # read -ra ver2 00:08:19.391 03:59:12 version -- scripts/common.sh@338 -- # local 'op=<' 00:08:19.391 03:59:12 version -- scripts/common.sh@340 -- # ver1_l=2 00:08:19.391 03:59:12 version -- scripts/common.sh@341 -- # ver2_l=1 00:08:19.391 03:59:12 version -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:19.391 03:59:12 version -- scripts/common.sh@344 -- # case "$op" in 00:08:19.391 03:59:12 version -- scripts/common.sh@345 -- # : 1 00:08:19.391 03:59:12 version -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:19.391 03:59:12 version -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:19.391 03:59:12 version -- scripts/common.sh@365 -- # decimal 1 00:08:19.391 03:59:12 version -- scripts/common.sh@353 -- # local d=1 00:08:19.391 03:59:12 version -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:19.391 03:59:12 version -- scripts/common.sh@355 -- # echo 1 00:08:19.391 03:59:12 version -- scripts/common.sh@365 -- # ver1[v]=1 00:08:19.391 03:59:12 version -- scripts/common.sh@366 -- # decimal 2 00:08:19.391 03:59:12 version -- scripts/common.sh@353 -- # local d=2 00:08:19.391 03:59:12 version -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:19.391 03:59:12 version -- scripts/common.sh@355 -- # echo 2 00:08:19.391 03:59:12 version -- scripts/common.sh@366 -- # ver2[v]=2 00:08:19.391 03:59:12 version -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:19.391 03:59:12 version -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:19.391 03:59:12 version -- scripts/common.sh@368 -- # return 0 00:08:19.391 03:59:12 version -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:19.391 03:59:12 version -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:08:19.391 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:19.391 --rc genhtml_branch_coverage=1 00:08:19.391 --rc genhtml_function_coverage=1 00:08:19.391 --rc genhtml_legend=1 00:08:19.391 --rc geninfo_all_blocks=1 00:08:19.391 --rc geninfo_unexecuted_blocks=1 00:08:19.391 00:08:19.391 ' 00:08:19.391 03:59:12 version -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:08:19.391 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:19.391 --rc genhtml_branch_coverage=1 00:08:19.391 --rc genhtml_function_coverage=1 00:08:19.391 --rc genhtml_legend=1 00:08:19.391 --rc geninfo_all_blocks=1 00:08:19.391 --rc geninfo_unexecuted_blocks=1 00:08:19.391 00:08:19.391 ' 00:08:19.391 03:59:12 version -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:08:19.391 
--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:19.391 --rc genhtml_branch_coverage=1 00:08:19.391 --rc genhtml_function_coverage=1 00:08:19.391 --rc genhtml_legend=1 00:08:19.391 --rc geninfo_all_blocks=1 00:08:19.391 --rc geninfo_unexecuted_blocks=1 00:08:19.391 00:08:19.391 ' 00:08:19.391 03:59:12 version -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:08:19.391 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:19.391 --rc genhtml_branch_coverage=1 00:08:19.391 --rc genhtml_function_coverage=1 00:08:19.391 --rc genhtml_legend=1 00:08:19.391 --rc geninfo_all_blocks=1 00:08:19.391 --rc geninfo_unexecuted_blocks=1 00:08:19.391 00:08:19.391 ' 00:08:19.391 03:59:12 version -- app/version.sh@17 -- # get_header_version major 00:08:19.391 03:59:12 version -- app/version.sh@14 -- # cut -f2 00:08:19.391 03:59:12 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:08:19.391 03:59:12 version -- app/version.sh@14 -- # tr -d '"' 00:08:19.391 03:59:12 version -- app/version.sh@17 -- # major=25 00:08:19.391 03:59:12 version -- app/version.sh@18 -- # get_header_version minor 00:08:19.391 03:59:12 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:08:19.391 03:59:12 version -- app/version.sh@14 -- # tr -d '"' 00:08:19.391 03:59:12 version -- app/version.sh@14 -- # cut -f2 00:08:19.391 03:59:12 version -- app/version.sh@18 -- # minor=1 00:08:19.391 03:59:12 version -- app/version.sh@19 -- # get_header_version patch 00:08:19.391 03:59:12 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:08:19.391 03:59:12 version -- app/version.sh@14 -- # cut -f2 00:08:19.391 03:59:12 version -- app/version.sh@14 -- # tr -d '"' 00:08:19.391 03:59:12 version -- app/version.sh@19 -- # patch=0 00:08:19.391 
03:59:12 version -- app/version.sh@20 -- # get_header_version suffix 00:08:19.391 03:59:12 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:08:19.391 03:59:12 version -- app/version.sh@14 -- # cut -f2 00:08:19.391 03:59:12 version -- app/version.sh@14 -- # tr -d '"' 00:08:19.391 03:59:12 version -- app/version.sh@20 -- # suffix=-pre 00:08:19.391 03:59:12 version -- app/version.sh@22 -- # version=25.1 00:08:19.391 03:59:12 version -- app/version.sh@25 -- # (( patch != 0 )) 00:08:19.391 03:59:12 version -- app/version.sh@28 -- # version=25.1rc0 00:08:19.391 03:59:12 version -- app/version.sh@30 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:08:19.391 03:59:12 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:08:19.391 03:59:12 version -- app/version.sh@30 -- # py_version=25.1rc0 00:08:19.391 03:59:12 version -- app/version.sh@31 -- # [[ 25.1rc0 == \2\5\.\1\r\c\0 ]] 00:08:19.391 ************************************ 00:08:19.391 END TEST version 00:08:19.391 ************************************ 00:08:19.391 00:08:19.391 real 0m0.297s 00:08:19.391 user 0m0.180s 00:08:19.391 sys 0m0.160s 00:08:19.391 03:59:12 version -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:19.391 03:59:12 version -- common/autotest_common.sh@10 -- # set +x 00:08:19.391 03:59:12 -- spdk/autotest.sh@179 -- # '[' 0 -eq 1 ']' 00:08:19.391 03:59:12 -- spdk/autotest.sh@188 -- # [[ 1 -eq 1 ]] 00:08:19.391 03:59:12 -- spdk/autotest.sh@189 -- # run_test bdev_raid /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh 00:08:19.391 03:59:12 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:19.391 03:59:12 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:19.391 03:59:12 -- 
common/autotest_common.sh@10 -- # set +x 00:08:19.391 ************************************ 00:08:19.391 START TEST bdev_raid 00:08:19.391 ************************************ 00:08:19.391 03:59:12 bdev_raid -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh 00:08:19.651 * Looking for test storage... 00:08:19.651 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:08:19.651 03:59:12 bdev_raid -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:08:19.651 03:59:12 bdev_raid -- common/autotest_common.sh@1711 -- # lcov --version 00:08:19.651 03:59:12 bdev_raid -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:08:19.651 03:59:12 bdev_raid -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:08:19.651 03:59:12 bdev_raid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:19.651 03:59:12 bdev_raid -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:19.651 03:59:12 bdev_raid -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:19.651 03:59:12 bdev_raid -- scripts/common.sh@336 -- # IFS=.-: 00:08:19.651 03:59:12 bdev_raid -- scripts/common.sh@336 -- # read -ra ver1 00:08:19.651 03:59:12 bdev_raid -- scripts/common.sh@337 -- # IFS=.-: 00:08:19.651 03:59:12 bdev_raid -- scripts/common.sh@337 -- # read -ra ver2 00:08:19.651 03:59:12 bdev_raid -- scripts/common.sh@338 -- # local 'op=<' 00:08:19.651 03:59:12 bdev_raid -- scripts/common.sh@340 -- # ver1_l=2 00:08:19.651 03:59:12 bdev_raid -- scripts/common.sh@341 -- # ver2_l=1 00:08:19.651 03:59:12 bdev_raid -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:19.651 03:59:12 bdev_raid -- scripts/common.sh@344 -- # case "$op" in 00:08:19.651 03:59:12 bdev_raid -- scripts/common.sh@345 -- # : 1 00:08:19.651 03:59:12 bdev_raid -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:19.651 03:59:12 bdev_raid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:19.651 03:59:12 bdev_raid -- scripts/common.sh@365 -- # decimal 1 00:08:19.651 03:59:12 bdev_raid -- scripts/common.sh@353 -- # local d=1 00:08:19.651 03:59:12 bdev_raid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:19.651 03:59:12 bdev_raid -- scripts/common.sh@355 -- # echo 1 00:08:19.651 03:59:12 bdev_raid -- scripts/common.sh@365 -- # ver1[v]=1 00:08:19.651 03:59:12 bdev_raid -- scripts/common.sh@366 -- # decimal 2 00:08:19.651 03:59:12 bdev_raid -- scripts/common.sh@353 -- # local d=2 00:08:19.651 03:59:12 bdev_raid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:19.651 03:59:12 bdev_raid -- scripts/common.sh@355 -- # echo 2 00:08:19.651 03:59:12 bdev_raid -- scripts/common.sh@366 -- # ver2[v]=2 00:08:19.651 03:59:12 bdev_raid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:19.651 03:59:12 bdev_raid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:19.651 03:59:12 bdev_raid -- scripts/common.sh@368 -- # return 0 00:08:19.651 03:59:12 bdev_raid -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:19.651 03:59:12 bdev_raid -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:08:19.651 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:19.651 --rc genhtml_branch_coverage=1 00:08:19.651 --rc genhtml_function_coverage=1 00:08:19.651 --rc genhtml_legend=1 00:08:19.651 --rc geninfo_all_blocks=1 00:08:19.651 --rc geninfo_unexecuted_blocks=1 00:08:19.651 00:08:19.651 ' 00:08:19.651 03:59:12 bdev_raid -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:08:19.651 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:19.651 --rc genhtml_branch_coverage=1 00:08:19.651 --rc genhtml_function_coverage=1 00:08:19.651 --rc genhtml_legend=1 00:08:19.651 --rc geninfo_all_blocks=1 00:08:19.651 --rc geninfo_unexecuted_blocks=1 00:08:19.651 00:08:19.651 ' 00:08:19.651 03:59:12 bdev_raid -- common/autotest_common.sh@1725 -- 
# export 'LCOV=lcov 00:08:19.651 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:19.651 --rc genhtml_branch_coverage=1 00:08:19.651 --rc genhtml_function_coverage=1 00:08:19.651 --rc genhtml_legend=1 00:08:19.651 --rc geninfo_all_blocks=1 00:08:19.651 --rc geninfo_unexecuted_blocks=1 00:08:19.651 00:08:19.651 ' 00:08:19.651 03:59:12 bdev_raid -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:08:19.651 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:19.651 --rc genhtml_branch_coverage=1 00:08:19.651 --rc genhtml_function_coverage=1 00:08:19.651 --rc genhtml_legend=1 00:08:19.651 --rc geninfo_all_blocks=1 00:08:19.651 --rc geninfo_unexecuted_blocks=1 00:08:19.651 00:08:19.651 ' 00:08:19.651 03:59:12 bdev_raid -- bdev/bdev_raid.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:08:19.651 03:59:12 bdev_raid -- bdev/nbd_common.sh@6 -- # set -e 00:08:19.651 03:59:12 bdev_raid -- bdev/bdev_raid.sh@14 -- # rpc_py=rpc_cmd 00:08:19.651 03:59:12 bdev_raid -- bdev/bdev_raid.sh@946 -- # mkdir -p /raidtest 00:08:19.651 03:59:12 bdev_raid -- bdev/bdev_raid.sh@947 -- # trap 'cleanup; exit 1' EXIT 00:08:19.651 03:59:12 bdev_raid -- bdev/bdev_raid.sh@949 -- # base_blocklen=512 00:08:19.651 03:59:12 bdev_raid -- bdev/bdev_raid.sh@951 -- # run_test raid1_resize_data_offset_test raid_resize_data_offset_test 00:08:19.651 03:59:12 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:19.651 03:59:12 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:19.651 03:59:12 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:19.651 ************************************ 00:08:19.651 START TEST raid1_resize_data_offset_test 00:08:19.651 ************************************ 00:08:19.651 Process raid pid: 60135 00:08:19.651 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:08:19.651 03:59:12 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@1129 -- # raid_resize_data_offset_test 00:08:19.651 03:59:12 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@917 -- # raid_pid=60135 00:08:19.651 03:59:12 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@918 -- # echo 'Process raid pid: 60135' 00:08:19.651 03:59:12 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@919 -- # waitforlisten 60135 00:08:19.652 03:59:12 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@916 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:08:19.652 03:59:12 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@835 -- # '[' -z 60135 ']' 00:08:19.652 03:59:12 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:19.652 03:59:12 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:19.652 03:59:12 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:19.652 03:59:12 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:19.652 03:59:12 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:08:19.652 [2024-12-06 03:59:12.975665] Starting SPDK v25.01-pre git sha1 a4d2a837b / DPDK 24.03.0 initialization... 
00:08:19.652 [2024-12-06 03:59:12.975863] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:19.909 [2024-12-06 03:59:13.152443] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:20.168 [2024-12-06 03:59:13.271373] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:20.168 [2024-12-06 03:59:13.477264] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:20.168 [2024-12-06 03:59:13.477378] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:20.733 03:59:13 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:20.733 03:59:13 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@868 -- # return 0 00:08:20.733 03:59:13 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@922 -- # rpc_cmd bdev_malloc_create -b malloc0 64 512 -o 16 00:08:20.733 03:59:13 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:20.733 03:59:13 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:08:20.733 malloc0 00:08:20.733 03:59:13 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:20.733 03:59:13 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@923 -- # rpc_cmd bdev_malloc_create -b malloc1 64 512 -o 16 00:08:20.733 03:59:13 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:20.733 03:59:13 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:08:20.733 malloc1 00:08:20.733 03:59:13 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:20.733 03:59:13 
bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@924 -- # rpc_cmd bdev_null_create null0 64 512 00:08:20.733 03:59:13 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:20.733 03:59:13 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:08:20.733 null0 00:08:20.733 03:59:13 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:20.733 03:59:13 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@926 -- # rpc_cmd bdev_raid_create -n Raid -r 1 -b ''\''malloc0 malloc1 null0'\''' -s 00:08:20.733 03:59:13 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:20.733 03:59:13 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:08:20.733 [2024-12-06 03:59:14.001578] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc0 is claimed 00:08:20.733 [2024-12-06 03:59:14.003807] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:08:20.733 [2024-12-06 03:59:14.003936] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev null0 is claimed 00:08:20.733 [2024-12-06 03:59:14.004170] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:08:20.734 [2024-12-06 03:59:14.004229] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 129024, blocklen 512 00:08:20.734 [2024-12-06 03:59:14.004537] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:08:20.734 [2024-12-06 03:59:14.004743] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:08:20.734 [2024-12-06 03:59:14.004759] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000007780 00:08:20.734 [2024-12-06 03:59:14.004938] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 
00:08:20.734 03:59:14 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:20.734 03:59:14 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@929 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:20.734 03:59:14 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:20.734 03:59:14 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:08:20.734 03:59:14 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@929 -- # jq -r '.[].base_bdevs_list[2].data_offset' 00:08:20.734 03:59:14 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:20.734 03:59:14 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@929 -- # (( 2048 == 2048 )) 00:08:20.734 03:59:14 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@931 -- # rpc_cmd bdev_null_delete null0 00:08:20.734 03:59:14 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:20.734 03:59:14 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:08:20.734 [2024-12-06 03:59:14.061494] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: null0 00:08:20.734 03:59:14 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:20.734 03:59:14 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@935 -- # rpc_cmd bdev_malloc_create -b malloc2 512 512 -o 30 00:08:20.734 03:59:14 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:20.734 03:59:14 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:08:21.301 malloc2 00:08:21.301 03:59:14 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:21.301 03:59:14 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@936 -- # rpc_cmd bdev_raid_add_base_bdev 
Raid malloc2 00:08:21.301 03:59:14 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:21.301 03:59:14 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:08:21.301 [2024-12-06 03:59:14.638143] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:08:21.560 [2024-12-06 03:59:14.656533] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:08:21.560 03:59:14 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:21.560 [2024-12-06 03:59:14.658714] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev Raid 00:08:21.560 03:59:14 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@939 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:21.560 03:59:14 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@939 -- # jq -r '.[].base_bdevs_list[2].data_offset' 00:08:21.560 03:59:14 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:21.560 03:59:14 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:08:21.560 03:59:14 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:21.560 03:59:14 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@939 -- # (( 2070 == 2070 )) 00:08:21.560 03:59:14 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@941 -- # killprocess 60135 00:08:21.560 03:59:14 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@954 -- # '[' -z 60135 ']' 00:08:21.560 03:59:14 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@958 -- # kill -0 60135 00:08:21.560 03:59:14 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@959 -- # uname 00:08:21.560 03:59:14 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux 
']' 00:08:21.560 03:59:14 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60135 00:08:21.560 killing process with pid 60135 00:08:21.560 03:59:14 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:21.560 03:59:14 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:21.560 03:59:14 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60135' 00:08:21.560 03:59:14 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@973 -- # kill 60135 00:08:21.560 03:59:14 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@978 -- # wait 60135 00:08:21.560 [2024-12-06 03:59:14.748222] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:21.560 [2024-12-06 03:59:14.748616] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev Raid: Operation canceled 00:08:21.560 [2024-12-06 03:59:14.748698] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:21.560 [2024-12-06 03:59:14.748725] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: malloc2 00:08:21.560 [2024-12-06 03:59:14.785825] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:21.560 [2024-12-06 03:59:14.786434] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:21.560 [2024-12-06 03:59:14.786476] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Raid, state offline 00:08:23.474 [2024-12-06 03:59:16.705855] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:24.856 03:59:17 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@943 -- # return 0 00:08:24.856 00:08:24.856 real 0m4.954s 00:08:24.856 user 0m4.868s 00:08:24.856 sys 0m0.524s 00:08:24.856 
************************************ 00:08:24.856 END TEST raid1_resize_data_offset_test 00:08:24.856 ************************************ 00:08:24.856 03:59:17 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:24.856 03:59:17 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:08:24.856 03:59:17 bdev_raid -- bdev/bdev_raid.sh@953 -- # run_test raid0_resize_superblock_test raid_resize_superblock_test 0 00:08:24.856 03:59:17 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:08:24.856 03:59:17 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:24.856 03:59:17 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:24.856 ************************************ 00:08:24.856 START TEST raid0_resize_superblock_test 00:08:24.856 ************************************ 00:08:24.856 03:59:17 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@1129 -- # raid_resize_superblock_test 0 00:08:24.856 03:59:17 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@854 -- # local raid_level=0 00:08:24.856 03:59:17 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@857 -- # raid_pid=60224 00:08:24.856 03:59:17 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@856 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:08:24.856 03:59:17 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@858 -- # echo 'Process raid pid: 60224' 00:08:24.856 Process raid pid: 60224 00:08:24.856 03:59:17 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@859 -- # waitforlisten 60224 00:08:24.856 03:59:17 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 60224 ']' 00:08:24.856 03:59:17 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:24.856 Waiting for process to start up and listen on UNIX domain 
socket /var/tmp/spdk.sock... 00:08:24.856 03:59:17 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:24.856 03:59:17 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:24.856 03:59:17 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:24.856 03:59:17 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:24.856 [2024-12-06 03:59:17.994975] Starting SPDK v25.01-pre git sha1 a4d2a837b / DPDK 24.03.0 initialization... 00:08:24.856 [2024-12-06 03:59:17.995134] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:24.856 [2024-12-06 03:59:18.168812] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:25.120 [2024-12-06 03:59:18.283980] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:25.380 [2024-12-06 03:59:18.486549] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:25.380 [2024-12-06 03:59:18.486680] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:25.639 03:59:18 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:25.639 03:59:18 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:08:25.639 03:59:18 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@861 -- # rpc_cmd bdev_malloc_create -b malloc0 512 512 00:08:25.639 03:59:18 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:25.639 03:59:18 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 
00:08:26.208 malloc0 00:08:26.208 03:59:19 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:26.208 03:59:19 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@863 -- # rpc_cmd bdev_passthru_create -b malloc0 -p pt0 00:08:26.208 03:59:19 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:26.208 03:59:19 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:26.208 [2024-12-06 03:59:19.365850] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc0 00:08:26.208 [2024-12-06 03:59:19.365923] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:26.208 [2024-12-06 03:59:19.365958] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:08:26.208 [2024-12-06 03:59:19.365972] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:26.208 [2024-12-06 03:59:19.368164] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:26.208 [2024-12-06 03:59:19.368264] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt0 00:08:26.208 pt0 00:08:26.208 03:59:19 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:26.208 03:59:19 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@864 -- # rpc_cmd bdev_lvol_create_lvstore pt0 lvs0 00:08:26.208 03:59:19 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:26.208 03:59:19 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:26.208 47fd882b-01c8-4ab7-bc7f-578cc7b7a8a0 00:08:26.208 03:59:19 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:26.208 03:59:19 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@866 -- # rpc_cmd bdev_lvol_create -l lvs0 lvol0 64 
00:08:26.208 03:59:19 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:26.208 03:59:19 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:26.208 e56b7076-f255-4799-9850-c73411abab73 00:08:26.208 03:59:19 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:26.208 03:59:19 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@867 -- # rpc_cmd bdev_lvol_create -l lvs0 lvol1 64 00:08:26.208 03:59:19 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:26.208 03:59:19 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:26.208 8c1a5847-9b55-4bf2-a027-925332bd9695 00:08:26.208 03:59:19 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:26.208 03:59:19 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@869 -- # case $raid_level in 00:08:26.208 03:59:19 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@870 -- # rpc_cmd bdev_raid_create -n Raid -r 0 -z 64 -b ''\''lvs0/lvol0 lvs0/lvol1'\''' -s 00:08:26.208 03:59:19 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:26.208 03:59:19 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:26.208 [2024-12-06 03:59:19.499201] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev e56b7076-f255-4799-9850-c73411abab73 is claimed 00:08:26.208 [2024-12-06 03:59:19.499353] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev 8c1a5847-9b55-4bf2-a027-925332bd9695 is claimed 00:08:26.208 [2024-12-06 03:59:19.499522] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:08:26.208 [2024-12-06 03:59:19.499571] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 245760, blocklen 512 00:08:26.208 [2024-12-06 03:59:19.499845] 
bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:08:26.208 [2024-12-06 03:59:19.500093] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:08:26.208 [2024-12-06 03:59:19.500159] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000007780 00:08:26.208 [2024-12-06 03:59:19.500352] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:26.208 03:59:19 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:26.208 03:59:19 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@875 -- # jq '.[].num_blocks' 00:08:26.208 03:59:19 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@875 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol0 00:08:26.208 03:59:19 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:26.208 03:59:19 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:26.208 03:59:19 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:26.208 03:59:19 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@875 -- # (( 64 == 64 )) 00:08:26.208 03:59:19 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol1 00:08:26.208 03:59:19 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # jq '.[].num_blocks' 00:08:26.208 03:59:19 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:26.208 03:59:19 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:26.468 03:59:19 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:26.468 03:59:19 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # (( 64 == 64 )) 00:08:26.468 03:59:19 
bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:08:26.468 03:59:19 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@880 -- # rpc_cmd bdev_get_bdevs -b Raid 00:08:26.468 03:59:19 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:26.468 03:59:19 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:08:26.468 03:59:19 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@880 -- # jq '.[].num_blocks' 00:08:26.468 03:59:19 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:26.468 [2024-12-06 03:59:19.599345] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:26.468 03:59:19 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:26.468 03:59:19 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:08:26.468 03:59:19 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:08:26.468 03:59:19 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@880 -- # (( 245760 == 245760 )) 00:08:26.468 03:59:19 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@885 -- # rpc_cmd bdev_lvol_resize lvs0/lvol0 100 00:08:26.468 03:59:19 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:26.468 03:59:19 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:26.468 [2024-12-06 03:59:19.623230] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:08:26.468 [2024-12-06 03:59:19.623300] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'e56b7076-f255-4799-9850-c73411abab73' was resized: old size 131072, new size 204800 00:08:26.468 03:59:19 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:08:26.468 03:59:19 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@886 -- # rpc_cmd bdev_lvol_resize lvs0/lvol1 100 00:08:26.468 03:59:19 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:26.468 03:59:19 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:26.468 [2024-12-06 03:59:19.635121] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:08:26.468 [2024-12-06 03:59:19.635144] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev '8c1a5847-9b55-4bf2-a027-925332bd9695' was resized: old size 131072, new size 204800 00:08:26.468 [2024-12-06 03:59:19.635173] bdev_raid.c:2344:raid_bdev_resize_base_bdev: *NOTICE*: raid bdev 'Raid': block count was changed from 245760 to 393216 00:08:26.468 03:59:19 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:26.468 03:59:19 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # jq '.[].num_blocks' 00:08:26.468 03:59:19 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol0 00:08:26.468 03:59:19 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:26.468 03:59:19 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:26.468 03:59:19 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:26.468 03:59:19 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # (( 100 == 100 )) 00:08:26.468 03:59:19 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # jq '.[].num_blocks' 00:08:26.468 03:59:19 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol1 00:08:26.468 03:59:19 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:26.468 03:59:19 
bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:26.468 03:59:19 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:26.468 03:59:19 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # (( 100 == 100 )) 00:08:26.468 03:59:19 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:08:26.468 03:59:19 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@894 -- # jq '.[].num_blocks' 00:08:26.468 03:59:19 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:08:26.468 03:59:19 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@894 -- # rpc_cmd bdev_get_bdevs -b Raid 00:08:26.468 03:59:19 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:26.468 03:59:19 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:26.469 [2024-12-06 03:59:19.750968] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:26.469 03:59:19 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:26.469 03:59:19 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:08:26.469 03:59:19 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:08:26.469 03:59:19 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@894 -- # (( 393216 == 393216 )) 00:08:26.469 03:59:19 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@898 -- # rpc_cmd bdev_passthru_delete pt0 00:08:26.469 03:59:19 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:26.469 03:59:19 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:26.469 [2024-12-06 03:59:19.778748] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev pt0 being 
removed: closing lvstore lvs0 00:08:26.469 [2024-12-06 03:59:19.778901] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: lvs0/lvol0 00:08:26.469 [2024-12-06 03:59:19.778941] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:26.469 [2024-12-06 03:59:19.779002] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: lvs0/lvol1 00:08:26.469 [2024-12-06 03:59:19.779156] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:26.469 [2024-12-06 03:59:19.779222] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:26.469 [2024-12-06 03:59:19.779239] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Raid, state offline 00:08:26.469 03:59:19 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:26.469 03:59:19 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@899 -- # rpc_cmd bdev_passthru_create -b malloc0 -p pt0 00:08:26.469 03:59:19 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:26.469 03:59:19 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:26.469 [2024-12-06 03:59:19.790663] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc0 00:08:26.469 [2024-12-06 03:59:19.790758] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:26.469 [2024-12-06 03:59:19.790794] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008180 00:08:26.469 [2024-12-06 03:59:19.790823] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:26.469 [2024-12-06 03:59:19.793003] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:26.469 [2024-12-06 03:59:19.793062] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt0 
00:08:26.469 [2024-12-06 03:59:19.794731] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev e56b7076-f255-4799-9850-c73411abab73 00:08:26.469 [2024-12-06 03:59:19.794807] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev e56b7076-f255-4799-9850-c73411abab73 is claimed 00:08:26.469 [2024-12-06 03:59:19.794909] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev 8c1a5847-9b55-4bf2-a027-925332bd9695 00:08:26.469 [2024-12-06 03:59:19.794927] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev 8c1a5847-9b55-4bf2-a027-925332bd9695 is claimed 00:08:26.469 [2024-12-06 03:59:19.795111] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev 8c1a5847-9b55-4bf2-a027-925332bd9695 (2) smaller than existing raid bdev Raid (3) 00:08:26.469 [2024-12-06 03:59:19.795136] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev e56b7076-f255-4799-9850-c73411abab73: File exists 00:08:26.469 [2024-12-06 03:59:19.795170] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:08:26.469 [2024-12-06 03:59:19.795181] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 393216, blocklen 512 00:08:26.469 pt0 00:08:26.469 [2024-12-06 03:59:19.795420] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:08:26.469 [2024-12-06 03:59:19.795575] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:08:26.469 [2024-12-06 03:59:19.795584] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000007b00 00:08:26.469 [2024-12-06 03:59:19.795723] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:26.469 03:59:19 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:26.469 03:59:19 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@900 -- # rpc_cmd 
bdev_wait_for_examine 00:08:26.469 03:59:19 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:26.469 03:59:19 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:26.469 03:59:19 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:26.469 03:59:19 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:08:26.469 03:59:19 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@905 -- # rpc_cmd bdev_get_bdevs -b Raid 00:08:26.469 03:59:19 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:26.469 03:59:19 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:26.469 03:59:19 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:08:26.469 03:59:19 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@905 -- # jq '.[].num_blocks' 00:08:26.469 [2024-12-06 03:59:19.818956] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:26.728 03:59:19 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:26.728 03:59:19 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:08:26.728 03:59:19 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:08:26.728 03:59:19 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@905 -- # (( 393216 == 393216 )) 00:08:26.728 03:59:19 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@909 -- # killprocess 60224 00:08:26.728 03:59:19 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 60224 ']' 00:08:26.728 03:59:19 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@958 -- # kill -0 60224 00:08:26.728 03:59:19 bdev_raid.raid0_resize_superblock_test -- 
common/autotest_common.sh@959 -- # uname 00:08:26.728 03:59:19 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:26.728 03:59:19 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60224 00:08:26.728 killing process with pid 60224 00:08:26.728 03:59:19 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:26.728 03:59:19 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:26.728 03:59:19 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60224' 00:08:26.728 03:59:19 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@973 -- # kill 60224 00:08:26.728 03:59:19 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@978 -- # wait 60224 00:08:26.728 [2024-12-06 03:59:19.880275] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:26.728 [2024-12-06 03:59:19.880366] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:26.728 [2024-12-06 03:59:19.880442] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:26.728 [2024-12-06 03:59:19.880452] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Raid, state offline 00:08:28.105 [2024-12-06 03:59:21.305854] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:29.486 ************************************ 00:08:29.486 END TEST raid0_resize_superblock_test 00:08:29.486 ************************************ 00:08:29.486 03:59:22 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@911 -- # return 0 00:08:29.486 00:08:29.486 real 0m4.521s 00:08:29.486 user 0m4.685s 00:08:29.486 sys 0m0.558s 00:08:29.486 03:59:22 bdev_raid.raid0_resize_superblock_test -- 
common/autotest_common.sh@1130 -- # xtrace_disable 00:08:29.486 03:59:22 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:29.486 03:59:22 bdev_raid -- bdev/bdev_raid.sh@954 -- # run_test raid1_resize_superblock_test raid_resize_superblock_test 1 00:08:29.486 03:59:22 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:08:29.486 03:59:22 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:29.486 03:59:22 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:29.486 ************************************ 00:08:29.486 START TEST raid1_resize_superblock_test 00:08:29.486 ************************************ 00:08:29.486 03:59:22 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@1129 -- # raid_resize_superblock_test 1 00:08:29.486 03:59:22 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@854 -- # local raid_level=1 00:08:29.486 03:59:22 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@857 -- # raid_pid=60323 00:08:29.486 03:59:22 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@856 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:08:29.486 03:59:22 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@858 -- # echo 'Process raid pid: 60323' 00:08:29.486 Process raid pid: 60323 00:08:29.487 03:59:22 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@859 -- # waitforlisten 60323 00:08:29.487 03:59:22 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 60323 ']' 00:08:29.487 03:59:22 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:29.487 03:59:22 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:29.487 03:59:22 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX 
domain socket /var/tmp/spdk.sock...' 00:08:29.487 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:29.487 03:59:22 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:29.487 03:59:22 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:29.487 [2024-12-06 03:59:22.571595] Starting SPDK v25.01-pre git sha1 a4d2a837b / DPDK 24.03.0 initialization... 00:08:29.487 [2024-12-06 03:59:22.571715] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:29.487 [2024-12-06 03:59:22.727067] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:29.751 [2024-12-06 03:59:22.839847] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:29.751 [2024-12-06 03:59:23.039201] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:29.751 [2024-12-06 03:59:23.039241] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:30.337 03:59:23 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:30.337 03:59:23 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:08:30.337 03:59:23 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@861 -- # rpc_cmd bdev_malloc_create -b malloc0 512 512 00:08:30.337 03:59:23 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:30.337 03:59:23 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:30.595 malloc0 00:08:30.595 03:59:23 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:30.595 03:59:23 bdev_raid.raid1_resize_superblock_test -- 
bdev/bdev_raid.sh@863 -- # rpc_cmd bdev_passthru_create -b malloc0 -p pt0 00:08:30.595 03:59:23 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:30.595 03:59:23 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:30.855 [2024-12-06 03:59:23.949877] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc0 00:08:30.855 [2024-12-06 03:59:23.949980] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:30.855 [2024-12-06 03:59:23.950020] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:08:30.855 [2024-12-06 03:59:23.950059] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:30.855 [2024-12-06 03:59:23.952095] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:30.855 [2024-12-06 03:59:23.952170] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt0 00:08:30.855 pt0 00:08:30.855 03:59:23 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:30.855 03:59:23 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@864 -- # rpc_cmd bdev_lvol_create_lvstore pt0 lvs0 00:08:30.855 03:59:23 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:30.855 03:59:23 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:30.855 b6df8a51-44f6-45bd-88d6-32f3e8a4b7f8 00:08:30.855 03:59:24 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:30.855 03:59:24 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@866 -- # rpc_cmd bdev_lvol_create -l lvs0 lvol0 64 00:08:30.855 03:59:24 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:30.855 03:59:24 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 
-- # set +x 00:08:30.855 d2c23bcf-0988-4369-8915-65384e3c256a 00:08:30.855 03:59:24 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:30.855 03:59:24 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@867 -- # rpc_cmd bdev_lvol_create -l lvs0 lvol1 64 00:08:30.855 03:59:24 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:30.855 03:59:24 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:30.855 cd4e1582-2a8c-4d77-86e9-3f22e62105a7 00:08:30.855 03:59:24 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:30.855 03:59:24 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@869 -- # case $raid_level in 00:08:30.855 03:59:24 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@871 -- # rpc_cmd bdev_raid_create -n Raid -r 1 -b ''\''lvs0/lvol0 lvs0/lvol1'\''' -s 00:08:30.855 03:59:24 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:30.855 03:59:24 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:30.855 [2024-12-06 03:59:24.079010] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev d2c23bcf-0988-4369-8915-65384e3c256a is claimed 00:08:30.855 [2024-12-06 03:59:24.079171] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev cd4e1582-2a8c-4d77-86e9-3f22e62105a7 is claimed 00:08:30.855 [2024-12-06 03:59:24.079334] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:08:30.855 [2024-12-06 03:59:24.079381] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 122880, blocklen 512 00:08:30.855 [2024-12-06 03:59:24.079662] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:08:30.855 [2024-12-06 03:59:24.079885] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 
00:08:30.855 [2024-12-06 03:59:24.079931] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000007780 00:08:30.855 [2024-12-06 03:59:24.080130] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:30.855 03:59:24 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:30.855 03:59:24 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@875 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol0 00:08:30.855 03:59:24 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:30.855 03:59:24 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:30.855 03:59:24 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@875 -- # jq '.[].num_blocks' 00:08:30.855 03:59:24 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:30.855 03:59:24 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@875 -- # (( 64 == 64 )) 00:08:30.855 03:59:24 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # jq '.[].num_blocks' 00:08:30.855 03:59:24 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol1 00:08:30.855 03:59:24 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:30.855 03:59:24 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:30.855 03:59:24 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:30.855 03:59:24 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # (( 64 == 64 )) 00:08:30.855 03:59:24 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:08:30.855 03:59:24 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@881 -- # rpc_cmd bdev_get_bdevs -b Raid 00:08:30.855 03:59:24 
bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:30.855 03:59:24 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:30.856 03:59:24 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:08:30.856 03:59:24 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@881 -- # jq '.[].num_blocks' 00:08:30.856 [2024-12-06 03:59:24.187063] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:30.856 03:59:24 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:31.115 03:59:24 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:08:31.115 03:59:24 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:08:31.115 03:59:24 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@881 -- # (( 122880 == 122880 )) 00:08:31.115 03:59:24 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@885 -- # rpc_cmd bdev_lvol_resize lvs0/lvol0 100 00:08:31.115 03:59:24 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:31.115 03:59:24 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:31.115 [2024-12-06 03:59:24.222923] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:08:31.115 [2024-12-06 03:59:24.222950] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'd2c23bcf-0988-4369-8915-65384e3c256a' was resized: old size 131072, new size 204800 00:08:31.115 03:59:24 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:31.115 03:59:24 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@886 -- # rpc_cmd bdev_lvol_resize lvs0/lvol1 100 00:08:31.115 03:59:24 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:08:31.115 03:59:24 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:31.115 [2024-12-06 03:59:24.234840] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:08:31.115 [2024-12-06 03:59:24.234905] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'cd4e1582-2a8c-4d77-86e9-3f22e62105a7' was resized: old size 131072, new size 204800 00:08:31.115 [2024-12-06 03:59:24.234962] bdev_raid.c:2344:raid_bdev_resize_base_bdev: *NOTICE*: raid bdev 'Raid': block count was changed from 122880 to 196608 00:08:31.115 03:59:24 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:31.115 03:59:24 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol0 00:08:31.115 03:59:24 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # jq '.[].num_blocks' 00:08:31.115 03:59:24 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:31.115 03:59:24 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:31.115 03:59:24 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:31.115 03:59:24 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # (( 100 == 100 )) 00:08:31.115 03:59:24 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol1 00:08:31.115 03:59:24 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # jq '.[].num_blocks' 00:08:31.115 03:59:24 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:31.115 03:59:24 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:31.115 03:59:24 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:31.115 03:59:24 
bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # (( 100 == 100 )) 00:08:31.115 03:59:24 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:08:31.115 03:59:24 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@895 -- # rpc_cmd bdev_get_bdevs -b Raid 00:08:31.115 03:59:24 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:31.115 03:59:24 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:31.115 03:59:24 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:08:31.115 03:59:24 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@895 -- # jq '.[].num_blocks' 00:08:31.115 [2024-12-06 03:59:24.346769] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:31.115 03:59:24 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:31.115 03:59:24 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:08:31.115 03:59:24 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:08:31.115 03:59:24 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@895 -- # (( 196608 == 196608 )) 00:08:31.115 03:59:24 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@898 -- # rpc_cmd bdev_passthru_delete pt0 00:08:31.115 03:59:24 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:31.115 03:59:24 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:31.115 [2024-12-06 03:59:24.394493] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev pt0 being removed: closing lvstore lvs0 00:08:31.115 [2024-12-06 03:59:24.394610] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: lvs0/lvol0 00:08:31.115 [2024-12-06 03:59:24.394656] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: lvs0/lvol1 00:08:31.115 [2024-12-06 03:59:24.394828] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:31.115 [2024-12-06 03:59:24.395075] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:31.115 [2024-12-06 03:59:24.395209] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:31.115 [2024-12-06 03:59:24.395268] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Raid, state offline 00:08:31.115 03:59:24 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:31.115 03:59:24 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@899 -- # rpc_cmd bdev_passthru_create -b malloc0 -p pt0 00:08:31.115 03:59:24 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:31.115 03:59:24 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:31.115 [2024-12-06 03:59:24.406392] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc0 00:08:31.115 [2024-12-06 03:59:24.406481] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:31.115 [2024-12-06 03:59:24.406532] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008180 00:08:31.115 [2024-12-06 03:59:24.406563] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:31.115 [2024-12-06 03:59:24.408810] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:31.115 [2024-12-06 03:59:24.408897] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt0 00:08:31.115 [2024-12-06 03:59:24.410634] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev d2c23bcf-0988-4369-8915-65384e3c256a 00:08:31.115 [2024-12-06 
03:59:24.410781] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev d2c23bcf-0988-4369-8915-65384e3c256a is claimed 00:08:31.115 [2024-12-06 03:59:24.410939] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev cd4e1582-2a8c-4d77-86e9-3f22e62105a7 00:08:31.115 [2024-12-06 03:59:24.411002] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev cd4e1582-2a8c-4d77-86e9-3f22e62105a7 is claimed 00:08:31.115 [2024-12-06 03:59:24.411254] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev cd4e1582-2a8c-4d77-86e9-3f22e62105a7 (2) smaller than existing raid bdev Raid (3) 00:08:31.115 pt0 00:08:31.115 [2024-12-06 03:59:24.411313] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev d2c23bcf-0988-4369-8915-65384e3c256a: File exists 00:08:31.115 [2024-12-06 03:59:24.411353] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:08:31.115 [2024-12-06 03:59:24.411365] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:08:31.115 [2024-12-06 03:59:24.411643] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:08:31.115 [2024-12-06 03:59:24.411823] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:08:31.115 [2024-12-06 03:59:24.411832] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000007b00 00:08:31.115 [2024-12-06 03:59:24.411988] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:31.115 03:59:24 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:31.115 03:59:24 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@900 -- # rpc_cmd bdev_wait_for_examine 00:08:31.115 03:59:24 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:31.115 03:59:24 
bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:31.115 03:59:24 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:31.115 03:59:24 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:08:31.115 03:59:24 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@906 -- # rpc_cmd bdev_get_bdevs -b Raid 00:08:31.115 03:59:24 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:31.115 03:59:24 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:31.115 03:59:24 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:08:31.115 03:59:24 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@906 -- # jq '.[].num_blocks' 00:08:31.115 [2024-12-06 03:59:24.430898] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:31.115 03:59:24 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:31.375 03:59:24 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:08:31.375 03:59:24 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:08:31.375 03:59:24 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@906 -- # (( 196608 == 196608 )) 00:08:31.375 03:59:24 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@909 -- # killprocess 60323 00:08:31.375 03:59:24 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 60323 ']' 00:08:31.375 03:59:24 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@958 -- # kill -0 60323 00:08:31.375 03:59:24 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@959 -- # uname 00:08:31.375 03:59:24 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux 
']' 00:08:31.375 03:59:24 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60323 00:08:31.375 killing process with pid 60323 00:08:31.375 03:59:24 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:31.375 03:59:24 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:31.375 03:59:24 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60323' 00:08:31.375 03:59:24 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@973 -- # kill 60323 00:08:31.375 [2024-12-06 03:59:24.510833] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:31.375 [2024-12-06 03:59:24.510896] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:31.375 [2024-12-06 03:59:24.510942] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:31.375 [2024-12-06 03:59:24.510951] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Raid, state offline 00:08:31.375 03:59:24 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@978 -- # wait 60323 00:08:32.752 [2024-12-06 03:59:25.924905] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:34.132 03:59:27 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@911 -- # return 0 00:08:34.132 00:08:34.132 real 0m4.565s 00:08:34.132 user 0m4.777s 00:08:34.132 sys 0m0.525s 00:08:34.132 03:59:27 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:34.132 03:59:27 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:34.132 ************************************ 00:08:34.132 END TEST raid1_resize_superblock_test 00:08:34.132 ************************************ 00:08:34.132 03:59:27 
bdev_raid -- bdev/bdev_raid.sh@956 -- # uname -s 00:08:34.132 03:59:27 bdev_raid -- bdev/bdev_raid.sh@956 -- # '[' Linux = Linux ']' 00:08:34.132 03:59:27 bdev_raid -- bdev/bdev_raid.sh@956 -- # modprobe -n nbd 00:08:34.132 03:59:27 bdev_raid -- bdev/bdev_raid.sh@957 -- # has_nbd=true 00:08:34.132 03:59:27 bdev_raid -- bdev/bdev_raid.sh@958 -- # modprobe nbd 00:08:34.132 03:59:27 bdev_raid -- bdev/bdev_raid.sh@959 -- # run_test raid_function_test_raid0 raid_function_test raid0 00:08:34.132 03:59:27 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:08:34.132 03:59:27 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:34.132 03:59:27 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:34.132 ************************************ 00:08:34.132 START TEST raid_function_test_raid0 00:08:34.132 ************************************ 00:08:34.132 03:59:27 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@1129 -- # raid_function_test raid0 00:08:34.132 03:59:27 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@64 -- # local raid_level=raid0 00:08:34.132 03:59:27 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@65 -- # local nbd=/dev/nbd0 00:08:34.132 03:59:27 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@66 -- # local raid_bdev 00:08:34.132 03:59:27 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@69 -- # raid_pid=60425 00:08:34.132 03:59:27 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@68 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:08:34.132 03:59:27 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@70 -- # echo 'Process raid pid: 60425' 00:08:34.132 Process raid pid: 60425 00:08:34.132 03:59:27 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@71 -- # waitforlisten 60425 00:08:34.132 03:59:27 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@835 -- # '[' -z 60425 ']' 00:08:34.132 03:59:27 
bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:34.132 03:59:27 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:34.132 03:59:27 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:34.132 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:34.132 03:59:27 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:34.132 03:59:27 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 00:08:34.132 [2024-12-06 03:59:27.216590] Starting SPDK v25.01-pre git sha1 a4d2a837b / DPDK 24.03.0 initialization... 00:08:34.132 [2024-12-06 03:59:27.216790] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:34.132 [2024-12-06 03:59:27.387230] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:34.391 [2024-12-06 03:59:27.507275] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:34.391 [2024-12-06 03:59:27.719863] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:34.391 [2024-12-06 03:59:27.720003] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:34.963 03:59:28 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:34.963 03:59:28 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@868 -- # return 0 00:08:34.964 03:59:28 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@73 -- # rpc_cmd bdev_malloc_create 32 512 -b Base_1 00:08:34.964 03:59:28 bdev_raid.raid_function_test_raid0 -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:08:34.964 03:59:28 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 00:08:34.964 Base_1 00:08:34.964 03:59:28 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:34.964 03:59:28 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@74 -- # rpc_cmd bdev_malloc_create 32 512 -b Base_2 00:08:34.964 03:59:28 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:34.964 03:59:28 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 00:08:34.964 Base_2 00:08:34.964 03:59:28 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:34.964 03:59:28 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@75 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''Base_1 Base_2'\''' -n raid 00:08:34.964 03:59:28 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:34.964 03:59:28 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 00:08:34.964 [2024-12-06 03:59:28.141331] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_1 is claimed 00:08:34.964 [2024-12-06 03:59:28.143325] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_2 is claimed 00:08:34.964 [2024-12-06 03:59:28.143441] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:08:34.964 [2024-12-06 03:59:28.143491] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:08:34.964 [2024-12-06 03:59:28.143758] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:08:34.964 [2024-12-06 03:59:28.143941] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:08:34.964 [2024-12-06 03:59:28.143982] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid, 
raid_bdev 0x617000007780 00:08:34.964 [2024-12-06 03:59:28.144209] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:34.964 03:59:28 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:34.964 03:59:28 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@77 -- # jq -r '.[0]["name"] | select(.)' 00:08:34.964 03:59:28 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@77 -- # rpc_cmd bdev_raid_get_bdevs online 00:08:34.964 03:59:28 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:34.964 03:59:28 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 00:08:34.964 03:59:28 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:34.964 03:59:28 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@77 -- # raid_bdev=raid 00:08:34.964 03:59:28 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@78 -- # '[' raid = '' ']' 00:08:34.964 03:59:28 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@83 -- # nbd_start_disks /var/tmp/spdk.sock raid /dev/nbd0 00:08:34.964 03:59:28 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:08:34.964 03:59:28 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@10 -- # bdev_list=('raid') 00:08:34.964 03:59:28 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:08:34.964 03:59:28 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:08:34.964 03:59:28 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:08:34.964 03:59:28 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@12 -- # local i 00:08:34.964 03:59:28 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:08:34.964 03:59:28 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:08:34.964 
03:59:28 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid /dev/nbd0 00:08:35.230 [2024-12-06 03:59:28.373012] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:08:35.230 /dev/nbd0 00:08:35.230 03:59:28 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:08:35.230 03:59:28 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:08:35.230 03:59:28 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:08:35.230 03:59:28 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@873 -- # local i 00:08:35.230 03:59:28 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:08:35.230 03:59:28 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:08:35.230 03:59:28 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:08:35.230 03:59:28 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@877 -- # break 00:08:35.230 03:59:28 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:08:35.230 03:59:28 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:08:35.230 03:59:28 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:35.230 1+0 records in 00:08:35.230 1+0 records out 00:08:35.230 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000531159 s, 7.7 MB/s 00:08:35.230 03:59:28 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:35.230 03:59:28 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@890 -- # size=4096 00:08:35.230 
03:59:28 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:35.230 03:59:28 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:08:35.230 03:59:28 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@893 -- # return 0 00:08:35.230 03:59:28 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:35.230 03:59:28 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:08:35.230 03:59:28 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@84 -- # nbd_get_count /var/tmp/spdk.sock 00:08:35.230 03:59:28 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk.sock 00:08:35.230 03:59:28 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_get_disks 00:08:35.489 03:59:28 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:08:35.489 { 00:08:35.489 "nbd_device": "/dev/nbd0", 00:08:35.489 "bdev_name": "raid" 00:08:35.489 } 00:08:35.489 ]' 00:08:35.489 03:59:28 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:08:35.490 03:59:28 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # echo '[ 00:08:35.490 { 00:08:35.490 "nbd_device": "/dev/nbd0", 00:08:35.490 "bdev_name": "raid" 00:08:35.490 } 00:08:35.490 ]' 00:08:35.490 03:59:28 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # nbd_disks_name=/dev/nbd0 00:08:35.490 03:59:28 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:08:35.490 03:59:28 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # echo /dev/nbd0 00:08:35.490 03:59:28 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # count=1 00:08:35.490 03:59:28 bdev_raid.raid_function_test_raid0 -- 
bdev/nbd_common.sh@66 -- # echo 1 00:08:35.490 03:59:28 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@84 -- # count=1 00:08:35.490 03:59:28 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@85 -- # '[' 1 -ne 1 ']' 00:08:35.490 03:59:28 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@89 -- # raid_unmap_data_verify /dev/nbd0 00:08:35.490 03:59:28 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@17 -- # hash blkdiscard 00:08:35.490 03:59:28 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@18 -- # local nbd=/dev/nbd0 00:08:35.490 03:59:28 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@19 -- # local blksize 00:08:35.490 03:59:28 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@20 -- # lsblk -o LOG-SEC /dev/nbd0 00:08:35.490 03:59:28 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@20 -- # grep -v LOG-SEC 00:08:35.490 03:59:28 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@20 -- # cut -d ' ' -f 5 00:08:35.490 03:59:28 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@20 -- # blksize=512 00:08:35.490 03:59:28 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@21 -- # local rw_blk_num=4096 00:08:35.490 03:59:28 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@22 -- # local rw_len=2097152 00:08:35.490 03:59:28 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@23 -- # unmap_blk_offs=('0' '1028' '321') 00:08:35.490 03:59:28 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@23 -- # local unmap_blk_offs 00:08:35.490 03:59:28 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@24 -- # unmap_blk_nums=('128' '2035' '456') 00:08:35.490 03:59:28 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@24 -- # local unmap_blk_nums 00:08:35.490 03:59:28 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@25 -- # local unmap_off 00:08:35.490 03:59:28 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@26 -- # local unmap_len 00:08:35.490 03:59:28 
bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@29 -- # dd if=/dev/urandom of=/raidtest/raidrandtest bs=512 count=4096 00:08:35.490 4096+0 records in 00:08:35.490 4096+0 records out 00:08:35.490 2097152 bytes (2.1 MB, 2.0 MiB) copied, 0.0326732 s, 64.2 MB/s 00:08:35.490 03:59:28 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@30 -- # dd if=/raidtest/raidrandtest of=/dev/nbd0 bs=512 count=4096 oflag=direct 00:08:35.750 4096+0 records in 00:08:35.750 4096+0 records out 00:08:35.750 2097152 bytes (2.1 MB, 2.0 MiB) copied, 0.201658 s, 10.4 MB/s 00:08:35.750 03:59:28 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@31 -- # blockdev --flushbufs /dev/nbd0 00:08:35.750 03:59:28 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@34 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:08:35.750 03:59:28 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i = 0 )) 00:08:35.750 03:59:28 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:08:35.750 03:59:28 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@37 -- # unmap_off=0 00:08:35.750 03:59:28 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@38 -- # unmap_len=65536 00:08:35.750 03:59:28 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=0 count=128 conv=notrunc 00:08:35.750 128+0 records in 00:08:35.750 128+0 records out 00:08:35.750 65536 bytes (66 kB, 64 KiB) copied, 0.00122777 s, 53.4 MB/s 00:08:35.750 03:59:28 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 0 -l 65536 /dev/nbd0 00:08:35.750 03:59:28 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0 00:08:35.750 03:59:28 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@48 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:08:35.750 03:59:28 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i++ )) 00:08:35.750 
03:59:28 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:08:35.750 03:59:28 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@37 -- # unmap_off=526336 00:08:35.750 03:59:28 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@38 -- # unmap_len=1041920 00:08:35.750 03:59:28 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=1028 count=2035 conv=notrunc 00:08:35.750 2035+0 records in 00:08:35.750 2035+0 records out 00:08:35.750 1041920 bytes (1.0 MB, 1018 KiB) copied, 0.0149958 s, 69.5 MB/s 00:08:35.750 03:59:29 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 526336 -l 1041920 /dev/nbd0 00:08:35.750 03:59:29 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0 00:08:35.750 03:59:29 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@48 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:08:35.750 03:59:29 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i++ )) 00:08:35.750 03:59:29 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:08:35.750 03:59:29 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@37 -- # unmap_off=164352 00:08:35.750 03:59:29 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@38 -- # unmap_len=233472 00:08:35.750 03:59:29 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=321 count=456 conv=notrunc 00:08:35.750 456+0 records in 00:08:35.750 456+0 records out 00:08:35.750 233472 bytes (233 kB, 228 KiB) copied, 0.00382991 s, 61.0 MB/s 00:08:35.750 03:59:29 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 164352 -l 233472 /dev/nbd0 00:08:35.750 03:59:29 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0 00:08:35.750 03:59:29 bdev_raid.raid_function_test_raid0 -- 
bdev/bdev_raid.sh@48 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:08:35.750 03:59:29 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i++ )) 00:08:35.750 03:59:29 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:08:35.750 03:59:29 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@52 -- # return 0 00:08:35.750 03:59:29 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@91 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:08:35.750 03:59:29 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:08:35.750 03:59:29 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:08:35.750 03:59:29 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:08:35.750 03:59:29 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@51 -- # local i 00:08:35.750 03:59:29 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:35.750 03:59:29 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:08:36.010 03:59:29 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:08:36.010 [2024-12-06 03:59:29.281780] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:36.010 03:59:29 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:08:36.010 03:59:29 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:08:36.010 03:59:29 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:36.010 03:59:29 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:36.010 03:59:29 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:08:36.010 03:59:29 
bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@41 -- # break 00:08:36.010 03:59:29 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@45 -- # return 0 00:08:36.010 03:59:29 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@92 -- # nbd_get_count /var/tmp/spdk.sock 00:08:36.010 03:59:29 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk.sock 00:08:36.010 03:59:29 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_get_disks 00:08:36.270 03:59:29 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:08:36.270 03:59:29 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:08:36.270 03:59:29 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:08:36.270 03:59:29 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:08:36.270 03:59:29 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # echo '' 00:08:36.270 03:59:29 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:08:36.270 03:59:29 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # true 00:08:36.270 03:59:29 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # count=0 00:08:36.270 03:59:29 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@66 -- # echo 0 00:08:36.270 03:59:29 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@92 -- # count=0 00:08:36.270 03:59:29 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@93 -- # '[' 0 -ne 0 ']' 00:08:36.270 03:59:29 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@97 -- # killprocess 60425 00:08:36.270 03:59:29 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@954 -- # '[' -z 60425 ']' 00:08:36.270 03:59:29 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@958 -- # kill -0 60425 
00:08:36.270 03:59:29 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@959 -- # uname 00:08:36.270 03:59:29 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:36.270 03:59:29 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60425 00:08:36.270 killing process with pid 60425 00:08:36.270 03:59:29 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:36.270 03:59:29 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:36.270 03:59:29 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60425' 00:08:36.270 03:59:29 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@973 -- # kill 60425 00:08:36.270 [2024-12-06 03:59:29.594607] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:36.270 [2024-12-06 03:59:29.594710] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:36.270 [2024-12-06 03:59:29.594757] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:36.270 [2024-12-06 03:59:29.594771] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid, state offline 00:08:36.270 03:59:29 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@978 -- # wait 60425 00:08:36.529 [2024-12-06 03:59:29.801411] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:37.912 03:59:30 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@99 -- # return 0 00:08:37.912 00:08:37.912 real 0m3.799s 00:08:37.912 user 0m4.362s 00:08:37.912 sys 0m0.958s 00:08:37.912 03:59:30 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:37.912 03:59:30 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 
00:08:37.912 ************************************ 00:08:37.912 END TEST raid_function_test_raid0 00:08:37.912 ************************************ 00:08:37.912 03:59:30 bdev_raid -- bdev/bdev_raid.sh@960 -- # run_test raid_function_test_concat raid_function_test concat 00:08:37.912 03:59:30 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:08:37.912 03:59:30 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:37.912 03:59:30 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:37.912 ************************************ 00:08:37.912 START TEST raid_function_test_concat 00:08:37.912 ************************************ 00:08:37.912 03:59:30 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@1129 -- # raid_function_test concat 00:08:37.912 03:59:30 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@64 -- # local raid_level=concat 00:08:37.912 03:59:30 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@65 -- # local nbd=/dev/nbd0 00:08:37.912 03:59:30 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@66 -- # local raid_bdev 00:08:37.912 03:59:30 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@69 -- # raid_pid=60549 00:08:37.912 Process raid pid: 60549 00:08:37.912 03:59:30 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@70 -- # echo 'Process raid pid: 60549' 00:08:37.912 03:59:30 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@68 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:08:37.912 03:59:30 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@71 -- # waitforlisten 60549 00:08:37.912 03:59:30 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@835 -- # '[' -z 60549 ']' 00:08:37.912 03:59:30 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:37.912 03:59:30 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@840 -- # local max_retries=100 
00:08:37.913 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:37.913 03:59:30 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:37.913 03:59:30 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:37.913 03:59:30 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x 00:08:37.913 [2024-12-06 03:59:31.080703] Starting SPDK v25.01-pre git sha1 a4d2a837b / DPDK 24.03.0 initialization... 00:08:37.913 [2024-12-06 03:59:31.080843] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:37.913 [2024-12-06 03:59:31.257914] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:38.172 [2024-12-06 03:59:31.372275] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:38.431 [2024-12-06 03:59:31.581203] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:38.431 [2024-12-06 03:59:31.581239] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:38.691 03:59:31 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:38.691 03:59:31 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@868 -- # return 0 00:08:38.691 03:59:31 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@73 -- # rpc_cmd bdev_malloc_create 32 512 -b Base_1 00:08:38.691 03:59:31 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:38.691 03:59:31 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x 00:08:38.691 Base_1 00:08:38.691 03:59:31 bdev_raid.raid_function_test_concat -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:38.691 03:59:31 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@74 -- # rpc_cmd bdev_malloc_create 32 512 -b Base_2 00:08:38.691 03:59:31 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:38.691 03:59:31 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x 00:08:38.691 Base_2 00:08:38.691 03:59:31 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:38.691 03:59:32 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@75 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''Base_1 Base_2'\''' -n raid 00:08:38.691 03:59:32 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:38.691 03:59:32 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x 00:08:38.691 [2024-12-06 03:59:32.007473] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_1 is claimed 00:08:38.691 [2024-12-06 03:59:32.009335] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_2 is claimed 00:08:38.691 [2024-12-06 03:59:32.009426] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:08:38.691 [2024-12-06 03:59:32.009439] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:08:38.691 [2024-12-06 03:59:32.009726] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:08:38.691 [2024-12-06 03:59:32.009928] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:08:38.691 [2024-12-06 03:59:32.009951] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid, raid_bdev 0x617000007780 00:08:38.691 [2024-12-06 03:59:32.010164] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:38.691 03:59:32 bdev_raid.raid_function_test_concat -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:38.691 03:59:32 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@77 -- # rpc_cmd bdev_raid_get_bdevs online 00:08:38.691 03:59:32 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:38.691 03:59:32 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x 00:08:38.691 03:59:32 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@77 -- # jq -r '.[0]["name"] | select(.)' 00:08:38.691 03:59:32 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:38.950 03:59:32 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@77 -- # raid_bdev=raid 00:08:38.950 03:59:32 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@78 -- # '[' raid = '' ']' 00:08:38.950 03:59:32 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@83 -- # nbd_start_disks /var/tmp/spdk.sock raid /dev/nbd0 00:08:38.950 03:59:32 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:08:38.950 03:59:32 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@10 -- # bdev_list=('raid') 00:08:38.950 03:59:32 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:08:38.950 03:59:32 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:08:38.950 03:59:32 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:08:38.950 03:59:32 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@12 -- # local i 00:08:38.950 03:59:32 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:08:38.950 03:59:32 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:08:38.950 03:59:32 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid /dev/nbd0 00:08:38.950 
[2024-12-06 03:59:32.267097] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:08:38.950 /dev/nbd0 00:08:39.209 03:59:32 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:08:39.209 03:59:32 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:08:39.209 03:59:32 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:08:39.209 03:59:32 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@873 -- # local i 00:08:39.209 03:59:32 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:08:39.209 03:59:32 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:08:39.209 03:59:32 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:08:39.209 03:59:32 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@877 -- # break 00:08:39.209 03:59:32 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:08:39.209 03:59:32 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:08:39.209 03:59:32 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:39.209 1+0 records in 00:08:39.209 1+0 records out 00:08:39.209 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000343698 s, 11.9 MB/s 00:08:39.209 03:59:32 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:39.209 03:59:32 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@890 -- # size=4096 00:08:39.209 03:59:32 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:39.209 03:59:32 
bdev_raid.raid_function_test_concat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:08:39.209 03:59:32 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@893 -- # return 0 00:08:39.209 03:59:32 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:39.209 03:59:32 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:08:39.209 03:59:32 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@84 -- # nbd_get_count /var/tmp/spdk.sock 00:08:39.209 03:59:32 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk.sock 00:08:39.209 03:59:32 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_get_disks 00:08:39.468 03:59:32 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:08:39.468 { 00:08:39.468 "nbd_device": "/dev/nbd0", 00:08:39.468 "bdev_name": "raid" 00:08:39.468 } 00:08:39.468 ]' 00:08:39.468 03:59:32 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # echo '[ 00:08:39.468 { 00:08:39.468 "nbd_device": "/dev/nbd0", 00:08:39.468 "bdev_name": "raid" 00:08:39.468 } 00:08:39.468 ]' 00:08:39.468 03:59:32 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:08:39.468 03:59:32 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # nbd_disks_name=/dev/nbd0 00:08:39.468 03:59:32 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # echo /dev/nbd0 00:08:39.468 03:59:32 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:08:39.468 03:59:32 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # count=1 00:08:39.468 03:59:32 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@66 -- # echo 1 00:08:39.468 03:59:32 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@84 -- # count=1 00:08:39.468 03:59:32 
bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@85 -- # '[' 1 -ne 1 ']' 00:08:39.468 03:59:32 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@89 -- # raid_unmap_data_verify /dev/nbd0 00:08:39.468 03:59:32 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@17 -- # hash blkdiscard 00:08:39.468 03:59:32 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@18 -- # local nbd=/dev/nbd0 00:08:39.468 03:59:32 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@19 -- # local blksize 00:08:39.468 03:59:32 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@20 -- # lsblk -o LOG-SEC /dev/nbd0 00:08:39.468 03:59:32 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@20 -- # cut -d ' ' -f 5 00:08:39.468 03:59:32 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@20 -- # grep -v LOG-SEC 00:08:39.468 03:59:32 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@20 -- # blksize=512 00:08:39.468 03:59:32 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@21 -- # local rw_blk_num=4096 00:08:39.468 03:59:32 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@22 -- # local rw_len=2097152 00:08:39.468 03:59:32 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@23 -- # unmap_blk_offs=('0' '1028' '321') 00:08:39.468 03:59:32 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@23 -- # local unmap_blk_offs 00:08:39.468 03:59:32 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@24 -- # unmap_blk_nums=('128' '2035' '456') 00:08:39.468 03:59:32 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@24 -- # local unmap_blk_nums 00:08:39.468 03:59:32 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@25 -- # local unmap_off 00:08:39.468 03:59:32 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@26 -- # local unmap_len 00:08:39.468 03:59:32 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@29 -- # dd if=/dev/urandom of=/raidtest/raidrandtest bs=512 count=4096 00:08:39.468 4096+0 records in 
00:08:39.468 4096+0 records out 00:08:39.468 2097152 bytes (2.1 MB, 2.0 MiB) copied, 0.0336023 s, 62.4 MB/s 00:08:39.468 03:59:32 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@30 -- # dd if=/raidtest/raidrandtest of=/dev/nbd0 bs=512 count=4096 oflag=direct 00:08:39.727 4096+0 records in 00:08:39.727 4096+0 records out 00:08:39.727 2097152 bytes (2.1 MB, 2.0 MiB) copied, 0.194789 s, 10.8 MB/s 00:08:39.727 03:59:32 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@31 -- # blockdev --flushbufs /dev/nbd0 00:08:39.727 03:59:32 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@34 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:08:39.727 03:59:32 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i = 0 )) 00:08:39.727 03:59:32 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:08:39.727 03:59:32 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@37 -- # unmap_off=0 00:08:39.727 03:59:32 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@38 -- # unmap_len=65536 00:08:39.727 03:59:32 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=0 count=128 conv=notrunc 00:08:39.727 128+0 records in 00:08:39.727 128+0 records out 00:08:39.727 65536 bytes (66 kB, 64 KiB) copied, 0.0010966 s, 59.8 MB/s 00:08:39.727 03:59:32 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 0 -l 65536 /dev/nbd0 00:08:39.727 03:59:32 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0 00:08:39.727 03:59:32 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@48 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:08:39.727 03:59:32 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i++ )) 00:08:39.727 03:59:32 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:08:39.727 03:59:32 bdev_raid.raid_function_test_concat -- 
bdev/bdev_raid.sh@37 -- # unmap_off=526336 00:08:39.727 03:59:32 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@38 -- # unmap_len=1041920 00:08:39.727 03:59:32 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=1028 count=2035 conv=notrunc 00:08:39.727 2035+0 records in 00:08:39.727 2035+0 records out 00:08:39.727 1041920 bytes (1.0 MB, 1018 KiB) copied, 0.0146529 s, 71.1 MB/s 00:08:39.727 03:59:32 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 526336 -l 1041920 /dev/nbd0 00:08:39.727 03:59:32 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0 00:08:39.727 03:59:33 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@48 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:08:39.727 03:59:33 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i++ )) 00:08:39.727 03:59:33 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:08:39.727 03:59:33 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@37 -- # unmap_off=164352 00:08:39.727 03:59:33 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@38 -- # unmap_len=233472 00:08:39.727 03:59:33 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=321 count=456 conv=notrunc 00:08:39.727 456+0 records in 00:08:39.727 456+0 records out 00:08:39.727 233472 bytes (233 kB, 228 KiB) copied, 0.00377936 s, 61.8 MB/s 00:08:39.727 03:59:33 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 164352 -l 233472 /dev/nbd0 00:08:39.727 03:59:33 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0 00:08:39.727 03:59:33 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@48 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:08:39.727 03:59:33 bdev_raid.raid_function_test_concat -- 
bdev/bdev_raid.sh@36 -- # (( i++ )) 00:08:39.727 03:59:33 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:08:39.727 03:59:33 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@52 -- # return 0 00:08:39.727 03:59:33 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@91 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:08:39.727 03:59:33 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:08:39.727 03:59:33 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:08:39.727 03:59:33 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:08:39.727 03:59:33 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@51 -- # local i 00:08:39.727 03:59:33 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:39.727 03:59:33 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:08:40.016 [2024-12-06 03:59:33.268015] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:40.016 03:59:33 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:08:40.016 03:59:33 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:08:40.016 03:59:33 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:08:40.016 03:59:33 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:40.016 03:59:33 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:40.016 03:59:33 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:08:40.016 03:59:33 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@41 -- # break 00:08:40.016 03:59:33 bdev_raid.raid_function_test_concat 
-- bdev/nbd_common.sh@45 -- # return 0 00:08:40.016 03:59:33 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@92 -- # nbd_get_count /var/tmp/spdk.sock 00:08:40.017 03:59:33 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk.sock 00:08:40.017 03:59:33 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_get_disks 00:08:40.285 03:59:33 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:08:40.285 03:59:33 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:08:40.285 03:59:33 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:08:40.285 03:59:33 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:08:40.286 03:59:33 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # echo '' 00:08:40.286 03:59:33 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:08:40.286 03:59:33 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # true 00:08:40.286 03:59:33 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # count=0 00:08:40.286 03:59:33 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@66 -- # echo 0 00:08:40.286 03:59:33 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@92 -- # count=0 00:08:40.286 03:59:33 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@93 -- # '[' 0 -ne 0 ']' 00:08:40.286 03:59:33 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@97 -- # killprocess 60549 00:08:40.286 03:59:33 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@954 -- # '[' -z 60549 ']' 00:08:40.286 03:59:33 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@958 -- # kill -0 60549 00:08:40.286 03:59:33 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@959 -- # uname 
00:08:40.286 03:59:33 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:40.286 03:59:33 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60549 00:08:40.286 03:59:33 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:40.286 03:59:33 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:40.286 killing process with pid 60549 00:08:40.286 03:59:33 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60549' 00:08:40.286 03:59:33 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@973 -- # kill 60549 00:08:40.286 [2024-12-06 03:59:33.589171] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:40.286 03:59:33 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@978 -- # wait 60549 00:08:40.286 [2024-12-06 03:59:33.589284] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:40.286 [2024-12-06 03:59:33.589353] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:40.286 [2024-12-06 03:59:33.589365] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid, state offline 00:08:40.545 [2024-12-06 03:59:33.798024] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:41.929 03:59:34 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@99 -- # return 0 00:08:41.929 00:08:41.929 real 0m3.937s 00:08:41.929 user 0m4.628s 00:08:41.929 sys 0m0.949s 00:08:41.929 03:59:34 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:41.929 03:59:34 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x 00:08:41.929 ************************************ 00:08:41.929 END TEST 
raid_function_test_concat 00:08:41.929 ************************************ 00:08:41.929 03:59:34 bdev_raid -- bdev/bdev_raid.sh@963 -- # run_test raid0_resize_test raid_resize_test 0 00:08:41.929 03:59:34 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:08:41.929 03:59:34 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:41.929 03:59:34 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:41.929 ************************************ 00:08:41.929 START TEST raid0_resize_test 00:08:41.929 ************************************ 00:08:41.929 03:59:34 bdev_raid.raid0_resize_test -- common/autotest_common.sh@1129 -- # raid_resize_test 0 00:08:41.929 03:59:34 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@332 -- # local raid_level=0 00:08:41.929 03:59:34 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@333 -- # local blksize=512 00:08:41.929 03:59:34 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@334 -- # local bdev_size_mb=32 00:08:41.929 03:59:34 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@335 -- # local new_bdev_size_mb=64 00:08:41.929 03:59:35 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@336 -- # local blkcnt 00:08:41.929 03:59:35 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@337 -- # local raid_size_mb 00:08:41.929 03:59:35 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@338 -- # local new_raid_size_mb 00:08:41.929 03:59:35 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@339 -- # local expected_size 00:08:41.929 03:59:35 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@342 -- # raid_pid=60677 00:08:41.929 03:59:35 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@341 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:08:41.929 03:59:35 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@343 -- # echo 'Process raid pid: 60677' 00:08:41.929 Process raid pid: 60677 00:08:41.929 03:59:35 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@344 -- # waitforlisten 60677 00:08:41.929 
03:59:35 bdev_raid.raid0_resize_test -- common/autotest_common.sh@835 -- # '[' -z 60677 ']' 00:08:41.929 03:59:35 bdev_raid.raid0_resize_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:41.929 03:59:35 bdev_raid.raid0_resize_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:41.929 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:41.929 03:59:35 bdev_raid.raid0_resize_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:41.929 03:59:35 bdev_raid.raid0_resize_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:41.929 03:59:35 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:08:41.929 [2024-12-06 03:59:35.088200] Starting SPDK v25.01-pre git sha1 a4d2a837b / DPDK 24.03.0 initialization... 00:08:41.929 [2024-12-06 03:59:35.088329] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:41.929 [2024-12-06 03:59:35.261470] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:42.189 [2024-12-06 03:59:35.371941] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:42.448 [2024-12-06 03:59:35.580262] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:42.448 [2024-12-06 03:59:35.580308] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:42.707 03:59:35 bdev_raid.raid0_resize_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:42.707 03:59:35 bdev_raid.raid0_resize_test -- common/autotest_common.sh@868 -- # return 0 00:08:42.707 03:59:35 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@346 -- # rpc_cmd bdev_null_create Base_1 32 512 00:08:42.707 03:59:35 
bdev_raid.raid0_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:42.707 03:59:35 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:08:42.707 Base_1 00:08:42.707 03:59:35 bdev_raid.raid0_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:42.707 03:59:35 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@347 -- # rpc_cmd bdev_null_create Base_2 32 512 00:08:42.707 03:59:35 bdev_raid.raid0_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:42.707 03:59:35 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:08:42.707 Base_2 00:08:42.707 03:59:35 bdev_raid.raid0_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:42.707 03:59:35 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@349 -- # '[' 0 -eq 0 ']' 00:08:42.707 03:59:35 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@350 -- # rpc_cmd bdev_raid_create -z 64 -r 0 -b ''\''Base_1 Base_2'\''' -n Raid 00:08:42.707 03:59:35 bdev_raid.raid0_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:42.707 03:59:35 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:08:42.707 [2024-12-06 03:59:35.945163] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_1 is claimed 00:08:42.707 [2024-12-06 03:59:35.947048] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_2 is claimed 00:08:42.707 [2024-12-06 03:59:35.947128] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:08:42.707 [2024-12-06 03:59:35.947141] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:08:42.708 [2024-12-06 03:59:35.947425] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:08:42.708 [2024-12-06 03:59:35.947559] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:08:42.708 [2024-12-06 03:59:35.947568] 
bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000007780 00:08:42.708 [2024-12-06 03:59:35.947726] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:42.708 03:59:35 bdev_raid.raid0_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:42.708 03:59:35 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@356 -- # rpc_cmd bdev_null_resize Base_1 64 00:08:42.708 03:59:35 bdev_raid.raid0_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:42.708 03:59:35 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:08:42.708 [2024-12-06 03:59:35.957098] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:08:42.708 [2024-12-06 03:59:35.957129] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'Base_1' was resized: old size 65536, new size 131072 00:08:42.708 true 00:08:42.708 03:59:35 bdev_raid.raid0_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:42.708 03:59:35 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@359 -- # rpc_cmd bdev_get_bdevs -b Raid 00:08:42.708 03:59:35 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@359 -- # jq '.[].num_blocks' 00:08:42.708 03:59:35 bdev_raid.raid0_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:42.708 03:59:35 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:08:42.708 [2024-12-06 03:59:35.973241] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:42.708 03:59:35 bdev_raid.raid0_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:42.708 03:59:36 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@359 -- # blkcnt=131072 00:08:42.708 03:59:36 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@360 -- # raid_size_mb=64 00:08:42.708 03:59:36 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@361 -- # '[' 0 -eq 0 ']' 00:08:42.708 03:59:36 
bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@362 -- # expected_size=64 00:08:42.708 03:59:36 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@366 -- # '[' 64 '!=' 64 ']' 00:08:42.708 03:59:36 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@372 -- # rpc_cmd bdev_null_resize Base_2 64 00:08:42.708 03:59:36 bdev_raid.raid0_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:42.708 03:59:36 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:08:42.708 [2024-12-06 03:59:36.020976] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:08:42.708 [2024-12-06 03:59:36.021008] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'Base_2' was resized: old size 65536, new size 131072 00:08:42.708 [2024-12-06 03:59:36.021070] bdev_raid.c:2344:raid_bdev_resize_base_bdev: *NOTICE*: raid bdev 'Raid': block count was changed from 131072 to 262144 00:08:42.708 true 00:08:42.708 03:59:36 bdev_raid.raid0_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:42.708 03:59:36 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@375 -- # jq '.[].num_blocks' 00:08:42.708 03:59:36 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@375 -- # rpc_cmd bdev_get_bdevs -b Raid 00:08:42.708 03:59:36 bdev_raid.raid0_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:42.708 03:59:36 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:08:42.708 [2024-12-06 03:59:36.037170] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:42.708 03:59:36 bdev_raid.raid0_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:42.967 03:59:36 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@375 -- # blkcnt=262144 00:08:42.967 03:59:36 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@376 -- # raid_size_mb=128 00:08:42.967 03:59:36 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@377 -- # '[' 0 -eq 0 ']' 00:08:42.967 03:59:36 
bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@378 -- # expected_size=128 00:08:42.967 03:59:36 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@382 -- # '[' 128 '!=' 128 ']' 00:08:42.967 03:59:36 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@387 -- # killprocess 60677 00:08:42.967 03:59:36 bdev_raid.raid0_resize_test -- common/autotest_common.sh@954 -- # '[' -z 60677 ']' 00:08:42.967 03:59:36 bdev_raid.raid0_resize_test -- common/autotest_common.sh@958 -- # kill -0 60677 00:08:42.967 03:59:36 bdev_raid.raid0_resize_test -- common/autotest_common.sh@959 -- # uname 00:08:42.967 03:59:36 bdev_raid.raid0_resize_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:42.967 03:59:36 bdev_raid.raid0_resize_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60677 00:08:42.967 03:59:36 bdev_raid.raid0_resize_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:42.967 03:59:36 bdev_raid.raid0_resize_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:42.967 killing process with pid 60677 00:08:42.967 03:59:36 bdev_raid.raid0_resize_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60677' 00:08:42.967 03:59:36 bdev_raid.raid0_resize_test -- common/autotest_common.sh@973 -- # kill 60677 00:08:42.967 [2024-12-06 03:59:36.101532] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:42.967 [2024-12-06 03:59:36.101632] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:42.967 [2024-12-06 03:59:36.101688] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:42.967 [2024-12-06 03:59:36.101702] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Raid, state offline 00:08:42.967 03:59:36 bdev_raid.raid0_resize_test -- common/autotest_common.sh@978 -- # wait 60677 00:08:42.967 [2024-12-06 03:59:36.121876] bdev_raid.c:1413:raid_bdev_exit: 
*DEBUG*: raid_bdev_exit 00:08:43.904 03:59:37 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@389 -- # return 0 00:08:43.904 00:08:43.904 real 0m2.245s 00:08:43.904 user 0m2.375s 00:08:43.904 sys 0m0.332s 00:08:43.904 03:59:37 bdev_raid.raid0_resize_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:43.905 03:59:37 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:08:43.905 ************************************ 00:08:43.905 END TEST raid0_resize_test 00:08:43.905 ************************************ 00:08:44.164 03:59:37 bdev_raid -- bdev/bdev_raid.sh@964 -- # run_test raid1_resize_test raid_resize_test 1 00:08:44.164 03:59:37 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:08:44.164 03:59:37 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:44.164 03:59:37 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:44.164 ************************************ 00:08:44.164 START TEST raid1_resize_test 00:08:44.164 ************************************ 00:08:44.164 03:59:37 bdev_raid.raid1_resize_test -- common/autotest_common.sh@1129 -- # raid_resize_test 1 00:08:44.164 03:59:37 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@332 -- # local raid_level=1 00:08:44.164 03:59:37 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@333 -- # local blksize=512 00:08:44.164 03:59:37 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@334 -- # local bdev_size_mb=32 00:08:44.164 03:59:37 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@335 -- # local new_bdev_size_mb=64 00:08:44.164 03:59:37 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@336 -- # local blkcnt 00:08:44.164 03:59:37 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@337 -- # local raid_size_mb 00:08:44.164 03:59:37 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@338 -- # local new_raid_size_mb 00:08:44.164 03:59:37 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@339 -- # local expected_size 00:08:44.164 03:59:37 
bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@342 -- # raid_pid=60733 00:08:44.164 Process raid pid: 60733 00:08:44.164 03:59:37 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@341 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:08:44.164 03:59:37 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@343 -- # echo 'Process raid pid: 60733' 00:08:44.164 03:59:37 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@344 -- # waitforlisten 60733 00:08:44.164 03:59:37 bdev_raid.raid1_resize_test -- common/autotest_common.sh@835 -- # '[' -z 60733 ']' 00:08:44.164 03:59:37 bdev_raid.raid1_resize_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:44.164 03:59:37 bdev_raid.raid1_resize_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:44.164 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:44.164 03:59:37 bdev_raid.raid1_resize_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:44.164 03:59:37 bdev_raid.raid1_resize_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:44.164 03:59:37 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:08:44.164 [2024-12-06 03:59:37.396181] Starting SPDK v25.01-pre git sha1 a4d2a837b / DPDK 24.03.0 initialization... 
00:08:44.164 [2024-12-06 03:59:37.396300] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:44.428 [2024-12-06 03:59:37.568565] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:44.428 [2024-12-06 03:59:37.682897] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:44.694 [2024-12-06 03:59:37.891495] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:44.694 [2024-12-06 03:59:37.891545] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:44.954 03:59:38 bdev_raid.raid1_resize_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:44.955 03:59:38 bdev_raid.raid1_resize_test -- common/autotest_common.sh@868 -- # return 0 00:08:44.955 03:59:38 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@346 -- # rpc_cmd bdev_null_create Base_1 32 512 00:08:44.955 03:59:38 bdev_raid.raid1_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:44.955 03:59:38 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:08:44.955 Base_1 00:08:44.955 03:59:38 bdev_raid.raid1_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:44.955 03:59:38 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@347 -- # rpc_cmd bdev_null_create Base_2 32 512 00:08:44.955 03:59:38 bdev_raid.raid1_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:44.955 03:59:38 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:08:44.955 Base_2 00:08:44.955 03:59:38 bdev_raid.raid1_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:44.955 03:59:38 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@349 -- # '[' 1 -eq 0 ']' 00:08:44.955 03:59:38 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@352 -- # rpc_cmd 
bdev_raid_create -r 1 -b ''\''Base_1 Base_2'\''' -n Raid 00:08:44.955 03:59:38 bdev_raid.raid1_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:44.955 03:59:38 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:08:44.955 [2024-12-06 03:59:38.247946] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_1 is claimed 00:08:44.955 [2024-12-06 03:59:38.249745] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_2 is claimed 00:08:44.955 [2024-12-06 03:59:38.249811] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:08:44.955 [2024-12-06 03:59:38.249824] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:08:44.955 [2024-12-06 03:59:38.250082] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:08:44.955 [2024-12-06 03:59:38.250215] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:08:44.955 [2024-12-06 03:59:38.250227] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000007780 00:08:44.955 [2024-12-06 03:59:38.250374] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:44.955 03:59:38 bdev_raid.raid1_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:44.955 03:59:38 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@356 -- # rpc_cmd bdev_null_resize Base_1 64 00:08:44.955 03:59:38 bdev_raid.raid1_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:44.955 03:59:38 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:08:44.955 [2024-12-06 03:59:38.259920] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:08:44.955 [2024-12-06 03:59:38.259970] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'Base_1' was resized: old size 65536, new size 131072 00:08:44.955 true 00:08:44.955 
03:59:38 bdev_raid.raid1_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:44.955 03:59:38 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@359 -- # rpc_cmd bdev_get_bdevs -b Raid 00:08:44.955 03:59:38 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@359 -- # jq '.[].num_blocks' 00:08:44.955 03:59:38 bdev_raid.raid1_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:44.955 03:59:38 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:08:44.955 [2024-12-06 03:59:38.276188] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:44.955 03:59:38 bdev_raid.raid1_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:45.215 03:59:38 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@359 -- # blkcnt=65536 00:08:45.215 03:59:38 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@360 -- # raid_size_mb=32 00:08:45.215 03:59:38 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@361 -- # '[' 1 -eq 0 ']' 00:08:45.215 03:59:38 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@364 -- # expected_size=32 00:08:45.215 03:59:38 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@366 -- # '[' 32 '!=' 32 ']' 00:08:45.215 03:59:38 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@372 -- # rpc_cmd bdev_null_resize Base_2 64 00:08:45.215 03:59:38 bdev_raid.raid1_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:45.215 03:59:38 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:08:45.215 [2024-12-06 03:59:38.319847] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:08:45.215 [2024-12-06 03:59:38.319882] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'Base_2' was resized: old size 65536, new size 131072 00:08:45.215 [2024-12-06 03:59:38.319914] bdev_raid.c:2344:raid_bdev_resize_base_bdev: *NOTICE*: raid bdev 'Raid': block count was changed from 65536 to 131072 00:08:45.215 true 00:08:45.215 03:59:38 
bdev_raid.raid1_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:45.215 03:59:38 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@375 -- # jq '.[].num_blocks' 00:08:45.215 03:59:38 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@375 -- # rpc_cmd bdev_get_bdevs -b Raid 00:08:45.215 03:59:38 bdev_raid.raid1_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:45.215 03:59:38 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:08:45.215 [2024-12-06 03:59:38.335937] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:45.215 03:59:38 bdev_raid.raid1_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:45.215 03:59:38 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@375 -- # blkcnt=131072 00:08:45.215 03:59:38 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@376 -- # raid_size_mb=64 00:08:45.215 03:59:38 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@377 -- # '[' 1 -eq 0 ']' 00:08:45.215 03:59:38 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@380 -- # expected_size=64 00:08:45.215 03:59:38 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@382 -- # '[' 64 '!=' 64 ']' 00:08:45.215 03:59:38 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@387 -- # killprocess 60733 00:08:45.215 03:59:38 bdev_raid.raid1_resize_test -- common/autotest_common.sh@954 -- # '[' -z 60733 ']' 00:08:45.215 03:59:38 bdev_raid.raid1_resize_test -- common/autotest_common.sh@958 -- # kill -0 60733 00:08:45.215 03:59:38 bdev_raid.raid1_resize_test -- common/autotest_common.sh@959 -- # uname 00:08:45.215 03:59:38 bdev_raid.raid1_resize_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:45.215 03:59:38 bdev_raid.raid1_resize_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60733 00:08:45.215 03:59:38 bdev_raid.raid1_resize_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:45.215 03:59:38 bdev_raid.raid1_resize_test -- 
common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:45.215 killing process with pid 60733 00:08:45.215 03:59:38 bdev_raid.raid1_resize_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60733' 00:08:45.215 03:59:38 bdev_raid.raid1_resize_test -- common/autotest_common.sh@973 -- # kill 60733 00:08:45.215 [2024-12-06 03:59:38.409083] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:45.215 [2024-12-06 03:59:38.409178] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:45.215 03:59:38 bdev_raid.raid1_resize_test -- common/autotest_common.sh@978 -- # wait 60733 00:08:45.215 [2024-12-06 03:59:38.409673] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:45.215 [2024-12-06 03:59:38.409702] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Raid, state offline 00:08:45.215 [2024-12-06 03:59:38.427108] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:46.597 03:59:39 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@389 -- # return 0 00:08:46.597 00:08:46.597 real 0m2.251s 00:08:46.597 user 0m2.364s 00:08:46.597 sys 0m0.349s 00:08:46.597 03:59:39 bdev_raid.raid1_resize_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:46.597 03:59:39 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:08:46.597 ************************************ 00:08:46.597 END TEST raid1_resize_test 00:08:46.597 ************************************ 00:08:46.597 03:59:39 bdev_raid -- bdev/bdev_raid.sh@966 -- # for n in {2..4} 00:08:46.597 03:59:39 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:08:46.597 03:59:39 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test raid0 2 false 00:08:46.597 03:59:39 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:08:46.597 03:59:39 bdev_raid 
-- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:46.597 03:59:39 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:46.597 ************************************ 00:08:46.597 START TEST raid_state_function_test 00:08:46.597 ************************************ 00:08:46.597 03:59:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test raid0 2 false 00:08:46.597 03:59:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid0 00:08:46.597 03:59:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:08:46.597 03:59:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:08:46.597 03:59:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:08:46.597 03:59:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:08:46.597 03:59:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:46.597 03:59:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:08:46.597 03:59:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:46.597 03:59:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:46.597 03:59:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:08:46.597 03:59:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:46.597 03:59:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:46.597 03:59:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:08:46.597 03:59:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:08:46.597 03:59:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # 
local raid_bdev_name=Existed_Raid 00:08:46.597 03:59:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:08:46.597 03:59:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:08:46.597 03:59:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:08:46.597 03:59:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']' 00:08:46.597 03:59:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:08:46.597 03:59:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:08:46.597 03:59:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:08:46.597 03:59:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:08:46.597 03:59:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=60790 00:08:46.597 03:59:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:08:46.597 03:59:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 60790' 00:08:46.597 Process raid pid: 60790 00:08:46.597 03:59:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 60790 00:08:46.597 03:59:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 60790 ']' 00:08:46.597 03:59:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:46.597 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:08:46.597 03:59:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:46.597 03:59:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:46.597 03:59:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:46.597 03:59:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:46.597 [2024-12-06 03:59:39.720136] Starting SPDK v25.01-pre git sha1 a4d2a837b / DPDK 24.03.0 initialization... 00:08:46.597 [2024-12-06 03:59:39.720372] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:46.597 [2024-12-06 03:59:39.895070] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:46.855 [2024-12-06 03:59:40.008093] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:46.855 [2024-12-06 03:59:40.203554] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:46.855 [2024-12-06 03:59:40.203689] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:47.423 03:59:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:47.423 03:59:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:08:47.423 03:59:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:08:47.423 03:59:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:47.423 03:59:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:47.423 [2024-12-06 
03:59:40.561353] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:47.423 [2024-12-06 03:59:40.561476] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:47.423 [2024-12-06 03:59:40.561510] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:47.423 [2024-12-06 03:59:40.561535] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:47.423 03:59:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:47.423 03:59:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:08:47.423 03:59:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:47.423 03:59:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:47.423 03:59:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:47.423 03:59:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:47.423 03:59:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:47.423 03:59:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:47.423 03:59:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:47.423 03:59:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:47.423 03:59:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:47.423 03:59:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:47.423 03:59:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 
00:08:47.423 03:59:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:47.423 03:59:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:47.423 03:59:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:47.423 03:59:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:47.423 "name": "Existed_Raid", 00:08:47.423 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:47.423 "strip_size_kb": 64, 00:08:47.423 "state": "configuring", 00:08:47.423 "raid_level": "raid0", 00:08:47.423 "superblock": false, 00:08:47.423 "num_base_bdevs": 2, 00:08:47.423 "num_base_bdevs_discovered": 0, 00:08:47.423 "num_base_bdevs_operational": 2, 00:08:47.423 "base_bdevs_list": [ 00:08:47.423 { 00:08:47.423 "name": "BaseBdev1", 00:08:47.423 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:47.423 "is_configured": false, 00:08:47.423 "data_offset": 0, 00:08:47.423 "data_size": 0 00:08:47.423 }, 00:08:47.423 { 00:08:47.423 "name": "BaseBdev2", 00:08:47.423 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:47.423 "is_configured": false, 00:08:47.423 "data_offset": 0, 00:08:47.423 "data_size": 0 00:08:47.423 } 00:08:47.423 ] 00:08:47.423 }' 00:08:47.423 03:59:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:47.423 03:59:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:47.682 03:59:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:47.682 03:59:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:47.682 03:59:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:47.682 [2024-12-06 03:59:41.020505] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:47.682 [2024-12-06 
03:59:41.020588] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:08:47.682 03:59:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:47.682 03:59:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:08:47.682 03:59:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:47.682 03:59:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:47.682 [2024-12-06 03:59:41.028478] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:47.682 [2024-12-06 03:59:41.028558] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:47.682 [2024-12-06 03:59:41.028590] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:47.682 [2024-12-06 03:59:41.028615] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:47.682 03:59:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:47.682 03:59:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:08:47.682 03:59:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:47.682 03:59:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:47.940 [2024-12-06 03:59:41.071583] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:47.940 BaseBdev1 00:08:47.940 03:59:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:47.940 03:59:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:08:47.940 03:59:41 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:08:47.940 03:59:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:47.940 03:59:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:08:47.940 03:59:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:47.940 03:59:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:47.940 03:59:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:47.940 03:59:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:47.940 03:59:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:47.940 03:59:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:47.940 03:59:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:08:47.940 03:59:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:47.940 03:59:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:47.940 [ 00:08:47.940 { 00:08:47.940 "name": "BaseBdev1", 00:08:47.940 "aliases": [ 00:08:47.940 "5555cd26-be7a-486a-ad74-1452c236d340" 00:08:47.940 ], 00:08:47.940 "product_name": "Malloc disk", 00:08:47.940 "block_size": 512, 00:08:47.940 "num_blocks": 65536, 00:08:47.940 "uuid": "5555cd26-be7a-486a-ad74-1452c236d340", 00:08:47.940 "assigned_rate_limits": { 00:08:47.940 "rw_ios_per_sec": 0, 00:08:47.940 "rw_mbytes_per_sec": 0, 00:08:47.940 "r_mbytes_per_sec": 0, 00:08:47.941 "w_mbytes_per_sec": 0 00:08:47.941 }, 00:08:47.941 "claimed": true, 00:08:47.941 "claim_type": "exclusive_write", 00:08:47.941 "zoned": false, 00:08:47.941 "supported_io_types": { 
00:08:47.941 "read": true, 00:08:47.941 "write": true, 00:08:47.941 "unmap": true, 00:08:47.941 "flush": true, 00:08:47.941 "reset": true, 00:08:47.941 "nvme_admin": false, 00:08:47.941 "nvme_io": false, 00:08:47.941 "nvme_io_md": false, 00:08:47.941 "write_zeroes": true, 00:08:47.941 "zcopy": true, 00:08:47.941 "get_zone_info": false, 00:08:47.941 "zone_management": false, 00:08:47.941 "zone_append": false, 00:08:47.941 "compare": false, 00:08:47.941 "compare_and_write": false, 00:08:47.941 "abort": true, 00:08:47.941 "seek_hole": false, 00:08:47.941 "seek_data": false, 00:08:47.941 "copy": true, 00:08:47.941 "nvme_iov_md": false 00:08:47.941 }, 00:08:47.941 "memory_domains": [ 00:08:47.941 { 00:08:47.941 "dma_device_id": "system", 00:08:47.941 "dma_device_type": 1 00:08:47.941 }, 00:08:47.941 { 00:08:47.941 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:47.941 "dma_device_type": 2 00:08:47.941 } 00:08:47.941 ], 00:08:47.941 "driver_specific": {} 00:08:47.941 } 00:08:47.941 ] 00:08:47.941 03:59:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:47.941 03:59:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:08:47.941 03:59:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:08:47.941 03:59:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:47.941 03:59:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:47.941 03:59:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:47.941 03:59:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:47.941 03:59:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:47.941 03:59:41 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:47.941 03:59:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:47.941 03:59:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:47.941 03:59:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:47.941 03:59:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:47.941 03:59:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:47.941 03:59:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:47.941 03:59:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:47.941 03:59:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:47.941 03:59:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:47.941 "name": "Existed_Raid", 00:08:47.941 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:47.941 "strip_size_kb": 64, 00:08:47.941 "state": "configuring", 00:08:47.941 "raid_level": "raid0", 00:08:47.941 "superblock": false, 00:08:47.941 "num_base_bdevs": 2, 00:08:47.941 "num_base_bdevs_discovered": 1, 00:08:47.941 "num_base_bdevs_operational": 2, 00:08:47.941 "base_bdevs_list": [ 00:08:47.941 { 00:08:47.941 "name": "BaseBdev1", 00:08:47.941 "uuid": "5555cd26-be7a-486a-ad74-1452c236d340", 00:08:47.941 "is_configured": true, 00:08:47.941 "data_offset": 0, 00:08:47.941 "data_size": 65536 00:08:47.941 }, 00:08:47.941 { 00:08:47.941 "name": "BaseBdev2", 00:08:47.941 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:47.941 "is_configured": false, 00:08:47.941 "data_offset": 0, 00:08:47.941 "data_size": 0 00:08:47.941 } 00:08:47.941 ] 00:08:47.941 }' 00:08:47.941 03:59:41 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:47.941 03:59:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:48.199 03:59:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:48.199 03:59:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:48.199 03:59:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:48.200 [2024-12-06 03:59:41.498891] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:48.200 [2024-12-06 03:59:41.498984] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:08:48.200 03:59:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:48.200 03:59:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:08:48.200 03:59:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:48.200 03:59:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:48.200 [2024-12-06 03:59:41.510907] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:48.200 [2024-12-06 03:59:41.512773] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:48.200 [2024-12-06 03:59:41.512858] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:48.200 03:59:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:48.200 03:59:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:08:48.200 03:59:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:48.200 03:59:41 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:08:48.200 03:59:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:48.200 03:59:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:48.200 03:59:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:48.200 03:59:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:48.200 03:59:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:48.200 03:59:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:48.200 03:59:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:48.200 03:59:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:48.200 03:59:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:48.200 03:59:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:48.200 03:59:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:48.200 03:59:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:48.200 03:59:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:48.200 03:59:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:48.459 03:59:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:48.459 "name": "Existed_Raid", 00:08:48.459 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:48.459 "strip_size_kb": 64, 00:08:48.459 "state": "configuring", 00:08:48.459 
"raid_level": "raid0", 00:08:48.459 "superblock": false, 00:08:48.459 "num_base_bdevs": 2, 00:08:48.459 "num_base_bdevs_discovered": 1, 00:08:48.459 "num_base_bdevs_operational": 2, 00:08:48.459 "base_bdevs_list": [ 00:08:48.459 { 00:08:48.459 "name": "BaseBdev1", 00:08:48.459 "uuid": "5555cd26-be7a-486a-ad74-1452c236d340", 00:08:48.459 "is_configured": true, 00:08:48.459 "data_offset": 0, 00:08:48.459 "data_size": 65536 00:08:48.459 }, 00:08:48.459 { 00:08:48.459 "name": "BaseBdev2", 00:08:48.459 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:48.459 "is_configured": false, 00:08:48.459 "data_offset": 0, 00:08:48.459 "data_size": 0 00:08:48.459 } 00:08:48.459 ] 00:08:48.459 }' 00:08:48.459 03:59:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:48.459 03:59:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:48.718 03:59:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:08:48.718 03:59:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:48.719 03:59:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:48.719 [2024-12-06 03:59:42.024956] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:48.719 [2024-12-06 03:59:42.025079] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:08:48.719 [2024-12-06 03:59:42.025118] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:08:48.719 [2024-12-06 03:59:42.025406] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:08:48.719 [2024-12-06 03:59:42.025623] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:08:48.719 [2024-12-06 03:59:42.025670] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name 
Existed_Raid, raid_bdev 0x617000007e80 00:08:48.719 [2024-12-06 03:59:42.025972] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:48.719 BaseBdev2 00:08:48.719 03:59:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:48.719 03:59:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:08:48.719 03:59:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:08:48.719 03:59:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:48.719 03:59:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:08:48.719 03:59:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:48.719 03:59:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:48.719 03:59:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:48.719 03:59:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:48.719 03:59:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:48.719 03:59:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:48.719 03:59:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:08:48.719 03:59:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:48.719 03:59:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:48.719 [ 00:08:48.719 { 00:08:48.719 "name": "BaseBdev2", 00:08:48.719 "aliases": [ 00:08:48.719 "44cd86b1-1863-4e71-977f-88e9a43cca8f" 00:08:48.719 ], 00:08:48.719 "product_name": "Malloc disk", 00:08:48.719 "block_size": 512, 00:08:48.719 
"num_blocks": 65536, 00:08:48.719 "uuid": "44cd86b1-1863-4e71-977f-88e9a43cca8f", 00:08:48.719 "assigned_rate_limits": { 00:08:48.719 "rw_ios_per_sec": 0, 00:08:48.719 "rw_mbytes_per_sec": 0, 00:08:48.719 "r_mbytes_per_sec": 0, 00:08:48.719 "w_mbytes_per_sec": 0 00:08:48.719 }, 00:08:48.719 "claimed": true, 00:08:48.719 "claim_type": "exclusive_write", 00:08:48.719 "zoned": false, 00:08:48.719 "supported_io_types": { 00:08:48.719 "read": true, 00:08:48.719 "write": true, 00:08:48.719 "unmap": true, 00:08:48.719 "flush": true, 00:08:48.719 "reset": true, 00:08:48.719 "nvme_admin": false, 00:08:48.719 "nvme_io": false, 00:08:48.719 "nvme_io_md": false, 00:08:48.719 "write_zeroes": true, 00:08:48.719 "zcopy": true, 00:08:48.719 "get_zone_info": false, 00:08:48.719 "zone_management": false, 00:08:48.719 "zone_append": false, 00:08:48.719 "compare": false, 00:08:48.719 "compare_and_write": false, 00:08:48.719 "abort": true, 00:08:48.719 "seek_hole": false, 00:08:48.719 "seek_data": false, 00:08:48.719 "copy": true, 00:08:48.719 "nvme_iov_md": false 00:08:48.719 }, 00:08:48.719 "memory_domains": [ 00:08:48.719 { 00:08:48.719 "dma_device_id": "system", 00:08:48.719 "dma_device_type": 1 00:08:48.719 }, 00:08:48.719 { 00:08:48.719 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:48.719 "dma_device_type": 2 00:08:48.719 } 00:08:48.719 ], 00:08:48.719 "driver_specific": {} 00:08:48.719 } 00:08:48.719 ] 00:08:48.719 03:59:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:48.719 03:59:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:08:48.719 03:59:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:08:48.719 03:59:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:48.719 03:59:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 64 2 00:08:48.719 03:59:42 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:48.719 03:59:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:48.719 03:59:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:48.719 03:59:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:48.719 03:59:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:48.719 03:59:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:48.719 03:59:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:48.719 03:59:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:48.719 03:59:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:48.719 03:59:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:48.719 03:59:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:48.719 03:59:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:48.719 03:59:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:48.979 03:59:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:48.979 03:59:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:48.979 "name": "Existed_Raid", 00:08:48.979 "uuid": "0a5b40bd-ecaa-4273-a483-c668faedd894", 00:08:48.979 "strip_size_kb": 64, 00:08:48.979 "state": "online", 00:08:48.979 "raid_level": "raid0", 00:08:48.979 "superblock": false, 00:08:48.979 "num_base_bdevs": 2, 00:08:48.979 "num_base_bdevs_discovered": 2, 00:08:48.979 
"num_base_bdevs_operational": 2, 00:08:48.979 "base_bdevs_list": [ 00:08:48.979 { 00:08:48.979 "name": "BaseBdev1", 00:08:48.979 "uuid": "5555cd26-be7a-486a-ad74-1452c236d340", 00:08:48.979 "is_configured": true, 00:08:48.979 "data_offset": 0, 00:08:48.979 "data_size": 65536 00:08:48.979 }, 00:08:48.979 { 00:08:48.979 "name": "BaseBdev2", 00:08:48.979 "uuid": "44cd86b1-1863-4e71-977f-88e9a43cca8f", 00:08:48.979 "is_configured": true, 00:08:48.979 "data_offset": 0, 00:08:48.979 "data_size": 65536 00:08:48.979 } 00:08:48.979 ] 00:08:48.979 }' 00:08:48.979 03:59:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:48.979 03:59:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:49.239 03:59:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:08:49.239 03:59:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:08:49.239 03:59:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:08:49.239 03:59:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:08:49.239 03:59:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:08:49.239 03:59:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:49.239 03:59:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:08:49.239 03:59:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:49.239 03:59:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:49.239 03:59:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:49.239 [2024-12-06 03:59:42.480554] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 
00:08:49.239 03:59:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:49.239 03:59:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:49.239 "name": "Existed_Raid", 00:08:49.239 "aliases": [ 00:08:49.239 "0a5b40bd-ecaa-4273-a483-c668faedd894" 00:08:49.239 ], 00:08:49.239 "product_name": "Raid Volume", 00:08:49.239 "block_size": 512, 00:08:49.239 "num_blocks": 131072, 00:08:49.239 "uuid": "0a5b40bd-ecaa-4273-a483-c668faedd894", 00:08:49.239 "assigned_rate_limits": { 00:08:49.239 "rw_ios_per_sec": 0, 00:08:49.239 "rw_mbytes_per_sec": 0, 00:08:49.239 "r_mbytes_per_sec": 0, 00:08:49.239 "w_mbytes_per_sec": 0 00:08:49.239 }, 00:08:49.239 "claimed": false, 00:08:49.239 "zoned": false, 00:08:49.239 "supported_io_types": { 00:08:49.239 "read": true, 00:08:49.239 "write": true, 00:08:49.239 "unmap": true, 00:08:49.239 "flush": true, 00:08:49.239 "reset": true, 00:08:49.239 "nvme_admin": false, 00:08:49.239 "nvme_io": false, 00:08:49.239 "nvme_io_md": false, 00:08:49.239 "write_zeroes": true, 00:08:49.239 "zcopy": false, 00:08:49.239 "get_zone_info": false, 00:08:49.239 "zone_management": false, 00:08:49.239 "zone_append": false, 00:08:49.239 "compare": false, 00:08:49.239 "compare_and_write": false, 00:08:49.239 "abort": false, 00:08:49.239 "seek_hole": false, 00:08:49.239 "seek_data": false, 00:08:49.239 "copy": false, 00:08:49.239 "nvme_iov_md": false 00:08:49.239 }, 00:08:49.239 "memory_domains": [ 00:08:49.239 { 00:08:49.239 "dma_device_id": "system", 00:08:49.239 "dma_device_type": 1 00:08:49.239 }, 00:08:49.239 { 00:08:49.239 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:49.239 "dma_device_type": 2 00:08:49.239 }, 00:08:49.239 { 00:08:49.239 "dma_device_id": "system", 00:08:49.239 "dma_device_type": 1 00:08:49.239 }, 00:08:49.239 { 00:08:49.239 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:49.239 "dma_device_type": 2 00:08:49.239 } 00:08:49.239 ], 00:08:49.239 "driver_specific": { 
00:08:49.239 "raid": { 00:08:49.239 "uuid": "0a5b40bd-ecaa-4273-a483-c668faedd894", 00:08:49.239 "strip_size_kb": 64, 00:08:49.239 "state": "online", 00:08:49.239 "raid_level": "raid0", 00:08:49.239 "superblock": false, 00:08:49.239 "num_base_bdevs": 2, 00:08:49.239 "num_base_bdevs_discovered": 2, 00:08:49.239 "num_base_bdevs_operational": 2, 00:08:49.239 "base_bdevs_list": [ 00:08:49.239 { 00:08:49.239 "name": "BaseBdev1", 00:08:49.239 "uuid": "5555cd26-be7a-486a-ad74-1452c236d340", 00:08:49.239 "is_configured": true, 00:08:49.239 "data_offset": 0, 00:08:49.239 "data_size": 65536 00:08:49.239 }, 00:08:49.239 { 00:08:49.239 "name": "BaseBdev2", 00:08:49.239 "uuid": "44cd86b1-1863-4e71-977f-88e9a43cca8f", 00:08:49.239 "is_configured": true, 00:08:49.239 "data_offset": 0, 00:08:49.239 "data_size": 65536 00:08:49.239 } 00:08:49.239 ] 00:08:49.239 } 00:08:49.239 } 00:08:49.239 }' 00:08:49.239 03:59:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:49.499 03:59:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:08:49.499 BaseBdev2' 00:08:49.499 03:59:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:49.499 03:59:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:49.499 03:59:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:49.499 03:59:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:49.499 03:59:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:08:49.499 03:59:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:49.499 
03:59:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:08:49.499 03:59:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:49.499 03:59:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:08:49.499 03:59:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:08:49.499 03:59:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:08:49.499 03:59:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2
00:08:49.499 03:59:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:49.499 03:59:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:08:49.499 03:59:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:08:49.499 03:59:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:49.499 03:59:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:08:49.499 03:59:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:08:49.499 03:59:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1
00:08:49.499 03:59:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:49.499 03:59:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:08:49.499 [2024-12-06 03:59:42.743843] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1
00:08:49.499 [2024-12-06 03:59:42.743928] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline
00:08:49.499 [2024-12-06 03:59:42.743986] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:08:49.499 03:59:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:49.499 03:59:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state
00:08:49.499 03:59:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0
00:08:49.499 03:59:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in
00:08:49.499 03:59:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1
00:08:49.499 03:59:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # expected_state=offline
00:08:49.500 03:59:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 1
00:08:49.500 03:59:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:08:49.500 03:59:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline
00:08:49.500 03:59:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0
00:08:49.500 03:59:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:08:49.500 03:59:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1
00:08:49.500 03:59:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:08:49.500 03:59:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:08:49.500 03:59:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:08:49.500 03:59:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:08:49.759 03:59:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:08:49.759 03:59:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:08:49.759 03:59:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:49.759 03:59:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:08:49.759 03:59:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:49.759 03:59:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:08:49.759 "name": "Existed_Raid",
00:08:49.759 "uuid": "0a5b40bd-ecaa-4273-a483-c668faedd894",
00:08:49.759 "strip_size_kb": 64,
00:08:49.759 "state": "offline",
00:08:49.759 "raid_level": "raid0",
00:08:49.759 "superblock": false,
00:08:49.759 "num_base_bdevs": 2,
00:08:49.759 "num_base_bdevs_discovered": 1,
00:08:49.759 "num_base_bdevs_operational": 1,
00:08:49.759 "base_bdevs_list": [
00:08:49.759 {
00:08:49.759 "name": null,
00:08:49.759 "uuid": "00000000-0000-0000-0000-000000000000",
00:08:49.759 "is_configured": false,
00:08:49.759 "data_offset": 0,
00:08:49.759 "data_size": 65536
00:08:49.759 },
00:08:49.759 {
00:08:49.759 "name": "BaseBdev2",
00:08:49.759 "uuid": "44cd86b1-1863-4e71-977f-88e9a43cca8f",
00:08:49.759 "is_configured": true,
00:08:49.759 "data_offset": 0,
00:08:49.759 "data_size": 65536
00:08:49.759 }
00:08:49.759 ]
00:08:49.759 }'
00:08:49.759 03:59:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:08:49.759 03:59:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:08:50.019 03:59:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 ))
00:08:50.019 03:59:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs ))
00:08:50.019 03:59:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all
00:08:50.019 03:59:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:50.019 03:59:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:08:50.019 03:59:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]'
00:08:50.019 03:59:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:50.019 03:59:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid
00:08:50.019 03:59:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']'
00:08:50.019 03:59:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2
00:08:50.019 03:59:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:50.020 03:59:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:08:50.020 [2024-12-06 03:59:43.338576] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2
00:08:50.020 [2024-12-06 03:59:43.338685] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline
00:08:50.279 03:59:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:50.279 03:59:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ ))
00:08:50.279 03:59:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs ))
00:08:50.279 03:59:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all
00:08:50.279 03:59:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:50.279 03:59:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:08:50.279 03:59:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)'
00:08:50.279 03:59:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:50.279 03:59:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev=
00:08:50.279 03:59:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']'
00:08:50.279 03:59:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']'
00:08:50.279 03:59:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 60790
00:08:50.279 03:59:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # '[' -z 60790 ']'
00:08:50.279 03:59:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # kill -0 60790
00:08:50.279 03:59:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # uname
00:08:50.279 03:59:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:08:50.279 03:59:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60790
killing process with pid 60790
00:08:50.279 03:59:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:08:50.279 03:59:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:08:50.279 03:59:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60790'
00:08:50.279 03:59:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@973 -- # kill 60790
00:08:50.279 [2024-12-06 03:59:43.522391] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start
00:08:50.279 03:59:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@978 -- # wait 60790
00:08:50.279 [2024-12-06 03:59:43.538129] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit
00:08:51.660 03:59:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0
00:08:51.660
00:08:51.660 real 0m5.031s
00:08:51.660 user 0m7.294s
00:08:51.660 sys 0m0.767s
00:08:51.660 03:59:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable
00:08:51.660 ************************************
00:08:51.660 END TEST raid_state_function_test
00:08:51.660 ************************************
00:08:51.660 03:59:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:08:51.660 03:59:44 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test raid0 2 true
00:08:51.660 03:59:44 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']'
00:08:51.660 03:59:44 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable
00:08:51.660 03:59:44 bdev_raid -- common/autotest_common.sh@10 -- # set +x
00:08:51.660 ************************************
00:08:51.660 START TEST raid_state_function_test_sb
00:08:51.660 ************************************
00:08:51.660 03:59:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test raid0 2 true
00:08:51.660 03:59:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid0
00:08:51.660 03:59:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2
00:08:51.660 03:59:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true
00:08:51.660 03:59:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev
00:08:51.660 03:59:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 ))
00:08:51.660 03:59:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs ))
00:08:51.660 03:59:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1
00:08:51.660 03:59:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ ))
00:08:51.660 03:59:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs ))
00:08:51.660 03:59:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2
00:08:51.660 03:59:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ ))
00:08:51.660 03:59:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs ))
00:08:51.660 03:59:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2')
00:08:51.660 03:59:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs
00:08:51.660 03:59:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid
00:08:51.660 03:59:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size
00:08:51.660 03:59:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg
00:08:51.660 03:59:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg
00:08:51.660 03:59:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']'
00:08:51.660 03:59:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64
00:08:51.660 03:59:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64'
00:08:51.660 03:59:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']'
00:08:51.660 03:59:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s
00:08:51.660 03:59:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=61043
00:08:51.660 03:59:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid
00:08:51.660 Process raid pid: 61043
00:08:51.660 03:59:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 61043'
00:08:51.660 03:59:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 61043
00:08:51.660 03:59:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 61043 ']'
00:08:51.660 03:59:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:08:51.660 03:59:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100
00:08:51.660 03:59:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:08:51.660 03:59:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable
00:08:51.660 03:59:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:08:51.660 [2024-12-06 03:59:44.815023] Starting SPDK v25.01-pre git sha1 a4d2a837b / DPDK 24.03.0 initialization...
00:08:51.660 [2024-12-06 03:59:44.815231] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:08:51.921 [2024-12-06 03:59:44.987989] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:08:52.179 [2024-12-06 03:59:45.102311] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:08:52.180 [2024-12-06 03:59:45.307932] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
[2024-12-06 03:59:45.308018] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:08:52.439 03:59:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:08:52.439 03:59:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0
00:08:52.439 03:59:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid
00:08:52.439 03:59:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:52.439 03:59:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:08:52.439 [2024-12-06 03:59:45.648535] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1
00:08:52.439 [2024-12-06 03:59:45.648593] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now
00:08:52.439 [2024-12-06 03:59:45.648604] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2
00:08:52.439 [2024-12-06 03:59:45.648614] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now
00:08:52.439 03:59:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:52.439 03:59:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2
00:08:52.439 03:59:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:08:52.439 03:59:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:08:52.439 03:59:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0
00:08:52.439 03:59:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:08:52.439 03:59:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2
00:08:52.439 03:59:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:08:52.439 03:59:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:08:52.439 03:59:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:08:52.439 03:59:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:08:52.439 03:59:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:08:52.439 03:59:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:08:52.439 03:59:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:52.439 03:59:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:08:52.439 03:59:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:52.439 03:59:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:08:52.439 "name": "Existed_Raid",
00:08:52.439 "uuid": "e5e95305-1a6f-47a1-bb07-a448cf677ea2",
00:08:52.439 "strip_size_kb": 64,
00:08:52.439 "state": "configuring",
00:08:52.439 "raid_level": "raid0",
00:08:52.439 "superblock": true,
00:08:52.439 "num_base_bdevs": 2,
00:08:52.439 "num_base_bdevs_discovered": 0,
00:08:52.439 "num_base_bdevs_operational": 2,
00:08:52.439 "base_bdevs_list": [
00:08:52.439 {
00:08:52.439 "name": "BaseBdev1",
00:08:52.439 "uuid": "00000000-0000-0000-0000-000000000000",
00:08:52.439 "is_configured": false,
00:08:52.439 "data_offset": 0,
00:08:52.439 "data_size": 0
00:08:52.439 },
00:08:52.439 {
00:08:52.439 "name": "BaseBdev2",
00:08:52.439 "uuid": "00000000-0000-0000-0000-000000000000",
00:08:52.439 "is_configured": false,
00:08:52.439 "data_offset": 0,
00:08:52.439 "data_size": 0
00:08:52.439 }
00:08:52.439 ]
00:08:52.439 }'
00:08:52.439 03:59:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:08:52.439 03:59:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:08:53.009 03:59:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid
00:08:53.009 03:59:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:53.009 03:59:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:08:53.009 [2024-12-06 03:59:46.079820] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid
00:08:53.009 [2024-12-06 03:59:46.079914] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring
00:08:53.009 03:59:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:53.009 03:59:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid
00:08:53.009 03:59:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:53.009 03:59:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:08:53.009 [2024-12-06 03:59:46.087780] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1
00:08:53.009 [2024-12-06 03:59:46.087863] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now
00:08:53.009 [2024-12-06 03:59:46.087892] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2
00:08:53.009 [2024-12-06 03:59:46.087918] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now
00:08:53.009 03:59:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:53.009 03:59:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1
00:08:53.009 03:59:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:53.009 03:59:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:08:53.009 [2024-12-06 03:59:46.132331] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed
00:08:53.009 BaseBdev1
00:08:53.009 03:59:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:53.009 03:59:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1
00:08:53.009 03:59:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1
00:08:53.009 03:59:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout=
00:08:53.009 03:59:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i
00:08:53.009 03:59:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]]
00:08:53.009 03:59:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000
00:08:53.009 03:59:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine
00:08:53.009 03:59:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:53.009 03:59:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:08:53.009 03:59:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:53.009 03:59:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000
00:08:53.009 03:59:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:53.009 03:59:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:08:53.009 [
00:08:53.009 {
00:08:53.009 "name": "BaseBdev1",
00:08:53.009 "aliases": [
00:08:53.009 "9dc88af4-3a13-4994-a209-0c3b231eb872"
00:08:53.009 ],
00:08:53.009 "product_name": "Malloc disk",
00:08:53.009 "block_size": 512,
00:08:53.009 "num_blocks": 65536,
00:08:53.009 "uuid": "9dc88af4-3a13-4994-a209-0c3b231eb872",
00:08:53.009 "assigned_rate_limits": {
00:08:53.009 "rw_ios_per_sec": 0,
00:08:53.009 "rw_mbytes_per_sec": 0,
00:08:53.009 "r_mbytes_per_sec": 0,
00:08:53.009 "w_mbytes_per_sec": 0
00:08:53.009 },
00:08:53.009 "claimed": true,
00:08:53.009 "claim_type": "exclusive_write",
00:08:53.009 "zoned": false,
00:08:53.009 "supported_io_types": {
00:08:53.009 "read": true,
00:08:53.009 "write": true,
00:08:53.009 "unmap": true,
00:08:53.009 "flush": true,
00:08:53.009 "reset": true,
00:08:53.009 "nvme_admin": false,
00:08:53.009 "nvme_io": false,
00:08:53.009 "nvme_io_md": false,
00:08:53.009 "write_zeroes": true,
00:08:53.009 "zcopy": true,
00:08:53.009 "get_zone_info": false,
00:08:53.009 "zone_management": false,
00:08:53.009 "zone_append": false,
00:08:53.009 "compare": false,
00:08:53.009 "compare_and_write": false,
00:08:53.009 "abort": true,
00:08:53.009 "seek_hole": false,
00:08:53.009 "seek_data": false,
00:08:53.009 "copy": true,
00:08:53.009 "nvme_iov_md": false
00:08:53.009 },
00:08:53.009 "memory_domains": [
00:08:53.009 {
00:08:53.009 "dma_device_id": "system",
00:08:53.009 "dma_device_type": 1
00:08:53.009 },
00:08:53.009 {
00:08:53.009 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:08:53.009 "dma_device_type": 2
00:08:53.009 }
00:08:53.009 ],
00:08:53.009 "driver_specific": {}
00:08:53.009 }
00:08:53.009 ]
00:08:53.009 03:59:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:53.009 03:59:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0
00:08:53.009 03:59:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2
00:08:53.009 03:59:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:08:53.009 03:59:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:08:53.009 03:59:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0
00:08:53.009 03:59:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:08:53.009 03:59:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2
00:08:53.009 03:59:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:08:53.009 03:59:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:08:53.009 03:59:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:08:53.009 03:59:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:08:53.009 03:59:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:08:53.009 03:59:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:08:53.009 03:59:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:53.009 03:59:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:08:53.009 03:59:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:53.009 03:59:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:08:53.009 "name": "Existed_Raid",
00:08:53.009 "uuid": "7f50f0b5-269e-42e3-a8fa-c2fc7e91d70c",
00:08:53.009 "strip_size_kb": 64,
00:08:53.009 "state": "configuring",
00:08:53.010 "raid_level": "raid0",
00:08:53.010 "superblock": true,
00:08:53.010 "num_base_bdevs": 2,
00:08:53.010 "num_base_bdevs_discovered": 1,
00:08:53.010 "num_base_bdevs_operational": 2,
00:08:53.010 "base_bdevs_list": [
00:08:53.010 {
00:08:53.010 "name": "BaseBdev1",
00:08:53.010 "uuid": "9dc88af4-3a13-4994-a209-0c3b231eb872",
00:08:53.010 "is_configured": true,
00:08:53.010 "data_offset": 2048,
00:08:53.010 "data_size": 63488
00:08:53.010 },
00:08:53.010 {
00:08:53.010 "name": "BaseBdev2",
00:08:53.010 "uuid": "00000000-0000-0000-0000-000000000000",
00:08:53.010 "is_configured": false,
00:08:53.010 "data_offset": 0,
00:08:53.010 "data_size": 0
00:08:53.010 }
00:08:53.010 ]
00:08:53.010 }'
00:08:53.010 03:59:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:08:53.010 03:59:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:08:53.279 03:59:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid
00:08:53.279 03:59:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:53.279 03:59:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:08:53.279 [2024-12-06 03:59:46.527725] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid
00:08:53.279 [2024-12-06 03:59:46.527819] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring
00:08:53.279 03:59:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:53.279 03:59:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid
00:08:53.279 03:59:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:53.279 03:59:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:08:53.279 [2024-12-06 03:59:46.535763] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed
00:08:53.279 [2024-12-06 03:59:46.537668] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2
00:08:53.279 [2024-12-06 03:59:46.537749] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now
00:08:53.279 03:59:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:53.279 03:59:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 ))
00:08:53.279 03:59:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs ))
00:08:53.279 03:59:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2
00:08:53.279 03:59:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:08:53.279 03:59:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:08:53.279 03:59:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0
00:08:53.279 03:59:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:08:53.279 03:59:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2
00:08:53.279 03:59:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:08:53.279 03:59:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:08:53.279 03:59:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:08:53.279 03:59:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:08:53.279 03:59:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:08:53.279 03:59:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:53.279 03:59:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:08:53.279 03:59:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:08:53.279 03:59:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:53.279 03:59:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:08:53.279 "name": "Existed_Raid",
00:08:53.279 "uuid": "92499e4c-a622-4fa1-8d9b-e90fade87db5",
00:08:53.279 "strip_size_kb": 64,
00:08:53.279 "state": "configuring",
00:08:53.279 "raid_level": "raid0",
00:08:53.279 "superblock": true,
00:08:53.279 "num_base_bdevs": 2,
00:08:53.279 "num_base_bdevs_discovered": 1,
00:08:53.279 "num_base_bdevs_operational": 2,
00:08:53.279 "base_bdevs_list": [
00:08:53.279 {
00:08:53.279 "name": "BaseBdev1",
00:08:53.279 "uuid": "9dc88af4-3a13-4994-a209-0c3b231eb872",
00:08:53.279 "is_configured": true,
00:08:53.279 "data_offset": 2048,
00:08:53.279 "data_size": 63488
00:08:53.279 },
00:08:53.279 {
00:08:53.279 "name": "BaseBdev2",
00:08:53.279 "uuid": "00000000-0000-0000-0000-000000000000",
00:08:53.279 "is_configured": false,
00:08:53.279 "data_offset": 0,
00:08:53.279 "data_size": 0
00:08:53.279 }
00:08:53.279 ]
00:08:53.279 }'
00:08:53.279 03:59:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:08:53.279 03:59:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:08:53.863 03:59:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2
00:08:53.863 03:59:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:53.863 03:59:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:08:53.863 [2024-12-06 03:59:46.984550] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed
00:08:53.863 [2024-12-06 03:59:46.984799] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80
00:08:53.863 [2024-12-06 03:59:46.984819] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512
00:08:53.863 BaseBdev2
00:08:53.863 [2024-12-06 03:59:46.985134] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40
00:08:53.863 [2024-12-06 03:59:46.985304] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80
00:08:53.863 [2024-12-06 03:59:46.985321] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80
00:08:53.863 [2024-12-06 03:59:46.985463] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:08:53.863 03:59:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:53.863 03:59:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2
00:08:53.863 03:59:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2
00:08:53.863 03:59:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout=
00:08:53.863 03:59:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i
00:08:53.863 03:59:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]]
00:08:53.863 03:59:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000
00:08:53.863 03:59:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine
00:08:53.863 03:59:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:53.863 03:59:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:08:53.863 03:59:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:53.863 03:59:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000
00:08:53.863 03:59:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:53.863 03:59:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:08:53.863 [
00:08:53.863 {
00:08:53.863 "name": "BaseBdev2",
00:08:53.863 "aliases": [
00:08:53.863 "ee7faf1e-4553-42be-abb5-ca9c8d5e47da"
00:08:53.863 ],
00:08:53.863 "product_name": "Malloc disk",
00:08:53.863 "block_size": 512,
00:08:53.863 "num_blocks": 65536,
00:08:53.863 "uuid": "ee7faf1e-4553-42be-abb5-ca9c8d5e47da",
00:08:53.863 "assigned_rate_limits": {
00:08:53.863 "rw_ios_per_sec": 0,
00:08:53.863 "rw_mbytes_per_sec": 0,
00:08:53.863 "r_mbytes_per_sec": 0,
00:08:53.863 "w_mbytes_per_sec": 0
00:08:53.863 },
00:08:53.863 "claimed": true,
00:08:53.863 "claim_type": "exclusive_write",
00:08:53.863 "zoned": false,
00:08:53.863 "supported_io_types": {
00:08:53.863 "read": true,
00:08:53.863 "write": true,
00:08:53.863 "unmap": true,
00:08:53.863 "flush": true,
00:08:53.863 "reset": true,
00:08:53.863 "nvme_admin": false,
00:08:53.863 "nvme_io": false,
00:08:53.863 "nvme_io_md": false,
00:08:53.863 "write_zeroes": true,
00:08:53.863 "zcopy": true,
00:08:53.863 "get_zone_info": false,
00:08:53.863 "zone_management": false,
00:08:53.863 "zone_append": false,
00:08:53.863 "compare": false,
00:08:53.863 "compare_and_write": false,
00:08:53.863 "abort": true,
00:08:53.863 "seek_hole": false,
00:08:53.863 "seek_data": false,
00:08:53.863 "copy": true,
00:08:53.863 "nvme_iov_md": false
00:08:53.863 },
00:08:53.863 "memory_domains": [
00:08:53.863 {
00:08:53.863 "dma_device_id": "system",
00:08:53.863 "dma_device_type": 1
00:08:53.863 },
00:08:53.863 {
00:08:53.863 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:08:53.863 "dma_device_type": 2
00:08:53.863 }
00:08:53.863 ],
00:08:53.863 "driver_specific": {}
00:08:53.863 }
00:08:53.863 ]
00:08:53.863 03:59:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:53.863 03:59:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0
00:08:53.863 03:59:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ ))
00:08:53.863 03:59:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs ))
00:08:53.863 03:59:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 64 2
00:08:53.863 03:59:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:08:53.863 03:59:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:08:53.863 03:59:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0
00:08:53.863 03:59:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:08:53.863 03:59:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2
00:08:53.863 03:59:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:08:53.863 03:59:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:08:53.863 03:59:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:08:53.863 03:59:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:08:53.863 03:59:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:08:53.863 03:59:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:08:53.863 03:59:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:53.863 03:59:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:08:53.863 03:59:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:53.863 03:59:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:08:53.863 "name": "Existed_Raid",
00:08:53.864 "uuid": "92499e4c-a622-4fa1-8d9b-e90fade87db5",
00:08:53.864 "strip_size_kb": 64,
00:08:53.864 "state": "online",
00:08:53.864 "raid_level": "raid0",
00:08:53.864 "superblock": true,
00:08:53.864 "num_base_bdevs": 2,
00:08:53.864 "num_base_bdevs_discovered": 2,
00:08:53.864 "num_base_bdevs_operational": 2,
00:08:53.864 "base_bdevs_list": [
00:08:53.864 {
00:08:53.864 "name": "BaseBdev1",
00:08:53.864 "uuid": "9dc88af4-3a13-4994-a209-0c3b231eb872",
00:08:53.864 "is_configured": true,
00:08:53.864 "data_offset": 2048,
00:08:53.864 "data_size": 63488
00:08:53.864 }, 00:08:53.864 { 00:08:53.864 "name": "BaseBdev2", 00:08:53.864 "uuid": "ee7faf1e-4553-42be-abb5-ca9c8d5e47da", 00:08:53.864 "is_configured": true, 00:08:53.864 "data_offset": 2048, 00:08:53.864 "data_size": 63488 00:08:53.864 } 00:08:53.864 ] 00:08:53.864 }' 00:08:53.864 03:59:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:53.864 03:59:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:54.124 03:59:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:08:54.124 03:59:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:08:54.124 03:59:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:08:54.124 03:59:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:08:54.124 03:59:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:08:54.124 03:59:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:54.124 03:59:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:54.124 03:59:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:08:54.124 03:59:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:54.124 03:59:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:54.124 [2024-12-06 03:59:47.440338] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:54.124 03:59:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:54.124 03:59:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:54.124 "name": 
"Existed_Raid", 00:08:54.124 "aliases": [ 00:08:54.124 "92499e4c-a622-4fa1-8d9b-e90fade87db5" 00:08:54.124 ], 00:08:54.124 "product_name": "Raid Volume", 00:08:54.124 "block_size": 512, 00:08:54.124 "num_blocks": 126976, 00:08:54.124 "uuid": "92499e4c-a622-4fa1-8d9b-e90fade87db5", 00:08:54.124 "assigned_rate_limits": { 00:08:54.124 "rw_ios_per_sec": 0, 00:08:54.124 "rw_mbytes_per_sec": 0, 00:08:54.124 "r_mbytes_per_sec": 0, 00:08:54.124 "w_mbytes_per_sec": 0 00:08:54.124 }, 00:08:54.124 "claimed": false, 00:08:54.124 "zoned": false, 00:08:54.124 "supported_io_types": { 00:08:54.124 "read": true, 00:08:54.124 "write": true, 00:08:54.124 "unmap": true, 00:08:54.125 "flush": true, 00:08:54.125 "reset": true, 00:08:54.125 "nvme_admin": false, 00:08:54.125 "nvme_io": false, 00:08:54.125 "nvme_io_md": false, 00:08:54.125 "write_zeroes": true, 00:08:54.125 "zcopy": false, 00:08:54.125 "get_zone_info": false, 00:08:54.125 "zone_management": false, 00:08:54.125 "zone_append": false, 00:08:54.125 "compare": false, 00:08:54.125 "compare_and_write": false, 00:08:54.125 "abort": false, 00:08:54.125 "seek_hole": false, 00:08:54.125 "seek_data": false, 00:08:54.125 "copy": false, 00:08:54.125 "nvme_iov_md": false 00:08:54.125 }, 00:08:54.125 "memory_domains": [ 00:08:54.125 { 00:08:54.125 "dma_device_id": "system", 00:08:54.125 "dma_device_type": 1 00:08:54.125 }, 00:08:54.125 { 00:08:54.125 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:54.125 "dma_device_type": 2 00:08:54.125 }, 00:08:54.125 { 00:08:54.125 "dma_device_id": "system", 00:08:54.125 "dma_device_type": 1 00:08:54.125 }, 00:08:54.125 { 00:08:54.125 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:54.125 "dma_device_type": 2 00:08:54.125 } 00:08:54.125 ], 00:08:54.125 "driver_specific": { 00:08:54.125 "raid": { 00:08:54.125 "uuid": "92499e4c-a622-4fa1-8d9b-e90fade87db5", 00:08:54.125 "strip_size_kb": 64, 00:08:54.125 "state": "online", 00:08:54.125 "raid_level": "raid0", 00:08:54.125 "superblock": true, 00:08:54.125 
"num_base_bdevs": 2, 00:08:54.125 "num_base_bdevs_discovered": 2, 00:08:54.125 "num_base_bdevs_operational": 2, 00:08:54.125 "base_bdevs_list": [ 00:08:54.125 { 00:08:54.125 "name": "BaseBdev1", 00:08:54.125 "uuid": "9dc88af4-3a13-4994-a209-0c3b231eb872", 00:08:54.125 "is_configured": true, 00:08:54.125 "data_offset": 2048, 00:08:54.125 "data_size": 63488 00:08:54.125 }, 00:08:54.125 { 00:08:54.125 "name": "BaseBdev2", 00:08:54.125 "uuid": "ee7faf1e-4553-42be-abb5-ca9c8d5e47da", 00:08:54.125 "is_configured": true, 00:08:54.125 "data_offset": 2048, 00:08:54.125 "data_size": 63488 00:08:54.125 } 00:08:54.125 ] 00:08:54.125 } 00:08:54.125 } 00:08:54.125 }' 00:08:54.125 03:59:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:54.384 03:59:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:08:54.384 BaseBdev2' 00:08:54.384 03:59:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:54.384 03:59:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:54.384 03:59:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:54.384 03:59:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:54.384 03:59:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:08:54.385 03:59:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:54.385 03:59:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:54.385 03:59:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:08:54.385 03:59:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:54.385 03:59:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:54.385 03:59:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:54.385 03:59:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:08:54.385 03:59:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:54.385 03:59:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:54.385 03:59:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:54.385 03:59:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:54.385 03:59:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:54.385 03:59:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:54.385 03:59:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:08:54.385 03:59:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:54.385 03:59:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:54.385 [2024-12-06 03:59:47.667606] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:08:54.385 [2024-12-06 03:59:47.667705] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:54.385 [2024-12-06 03:59:47.667770] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:54.644 03:59:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 
]] 00:08:54.644 03:59:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:08:54.644 03:59:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0 00:08:54.644 03:59:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:54.644 03:59:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1 00:08:54.644 03:59:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:08:54.644 03:59:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 1 00:08:54.644 03:59:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:54.644 03:59:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:08:54.644 03:59:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:54.644 03:59:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:54.644 03:59:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:08:54.644 03:59:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:54.644 03:59:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:54.644 03:59:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:54.644 03:59:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:54.644 03:59:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:54.644 03:59:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:54.644 03:59:47 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:54.644 03:59:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:54.644 03:59:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:54.644 03:59:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:54.644 "name": "Existed_Raid", 00:08:54.644 "uuid": "92499e4c-a622-4fa1-8d9b-e90fade87db5", 00:08:54.644 "strip_size_kb": 64, 00:08:54.644 "state": "offline", 00:08:54.644 "raid_level": "raid0", 00:08:54.644 "superblock": true, 00:08:54.644 "num_base_bdevs": 2, 00:08:54.644 "num_base_bdevs_discovered": 1, 00:08:54.644 "num_base_bdevs_operational": 1, 00:08:54.644 "base_bdevs_list": [ 00:08:54.644 { 00:08:54.644 "name": null, 00:08:54.644 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:54.644 "is_configured": false, 00:08:54.644 "data_offset": 0, 00:08:54.644 "data_size": 63488 00:08:54.644 }, 00:08:54.644 { 00:08:54.644 "name": "BaseBdev2", 00:08:54.644 "uuid": "ee7faf1e-4553-42be-abb5-ca9c8d5e47da", 00:08:54.644 "is_configured": true, 00:08:54.645 "data_offset": 2048, 00:08:54.645 "data_size": 63488 00:08:54.645 } 00:08:54.645 ] 00:08:54.645 }' 00:08:54.645 03:59:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:54.645 03:59:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:54.904 03:59:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:08:54.904 03:59:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:54.904 03:59:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:54.904 03:59:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:54.904 03:59:48 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:54.904 03:59:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:08:54.904 03:59:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:54.904 03:59:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:08:54.904 03:59:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:08:54.904 03:59:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:08:54.904 03:59:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:54.904 03:59:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:54.904 [2024-12-06 03:59:48.225877] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:08:54.904 [2024-12-06 03:59:48.225935] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:08:55.164 03:59:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:55.164 03:59:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:08:55.164 03:59:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:55.164 03:59:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:55.164 03:59:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:55.164 03:59:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:08:55.164 03:59:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:55.164 03:59:48 bdev_raid.raid_state_function_test_sb 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:55.164 03:59:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:08:55.164 03:59:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:08:55.164 03:59:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:08:55.164 03:59:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 61043 00:08:55.164 03:59:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 61043 ']' 00:08:55.164 03:59:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 61043 00:08:55.164 03:59:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:08:55.164 03:59:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:55.164 03:59:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61043 00:08:55.164 killing process with pid 61043 00:08:55.164 03:59:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:55.164 03:59:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:55.164 03:59:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 61043' 00:08:55.164 03:59:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 61043 00:08:55.164 [2024-12-06 03:59:48.411924] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:55.164 03:59:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 61043 00:08:55.164 [2024-12-06 03:59:48.429272] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:56.547 ************************************ 00:08:56.547 END TEST 
raid_state_function_test_sb 00:08:56.547 ************************************ 00:08:56.547 03:59:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:08:56.547 00:08:56.547 real 0m4.812s 00:08:56.547 user 0m6.944s 00:08:56.547 sys 0m0.728s 00:08:56.547 03:59:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:56.547 03:59:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:56.547 03:59:49 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test raid0 2 00:08:56.547 03:59:49 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:08:56.547 03:59:49 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:56.547 03:59:49 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:56.547 ************************************ 00:08:56.547 START TEST raid_superblock_test 00:08:56.547 ************************************ 00:08:56.547 03:59:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test raid0 2 00:08:56.547 03:59:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid0 00:08:56.547 03:59:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2 00:08:56.547 03:59:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:08:56.547 03:59:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:08:56.547 03:59:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:08:56.547 03:59:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:08:56.547 03:59:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:08:56.547 03:59:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:08:56.547 03:59:49 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:08:56.547 03:59:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:08:56.547 03:59:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:08:56.547 03:59:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:08:56.547 03:59:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:08:56.547 03:59:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid0 '!=' raid1 ']' 00:08:56.547 03:59:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:08:56.547 03:59:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:08:56.547 03:59:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=61295 00:08:56.547 03:59:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:08:56.547 03:59:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 61295 00:08:56.547 03:59:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 61295 ']' 00:08:56.547 03:59:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:56.547 03:59:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:56.547 03:59:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:56.547 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:08:56.547 03:59:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:56.547 03:59:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:56.547 [2024-12-06 03:59:49.686064] Starting SPDK v25.01-pre git sha1 a4d2a837b / DPDK 24.03.0 initialization... 00:08:56.547 [2024-12-06 03:59:49.686265] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61295 ] 00:08:56.547 [2024-12-06 03:59:49.838118] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:56.807 [2024-12-06 03:59:49.949574] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:56.807 [2024-12-06 03:59:50.149706] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:56.807 [2024-12-06 03:59:50.149818] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:57.377 03:59:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:57.377 03:59:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:08:57.377 03:59:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:08:57.377 03:59:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:08:57.377 03:59:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:08:57.377 03:59:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:08:57.377 03:59:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:08:57.377 03:59:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:08:57.377 03:59:50 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:08:57.377 03:59:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:08:57.377 03:59:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:08:57.377 03:59:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:57.377 03:59:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:57.377 malloc1 00:08:57.377 03:59:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:57.377 03:59:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:08:57.377 03:59:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:57.377 03:59:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:57.377 [2024-12-06 03:59:50.576409] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:08:57.377 [2024-12-06 03:59:50.576514] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:57.377 [2024-12-06 03:59:50.576575] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:08:57.377 [2024-12-06 03:59:50.576610] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:57.377 [2024-12-06 03:59:50.578857] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:57.377 [2024-12-06 03:59:50.578932] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:08:57.377 pt1 00:08:57.377 03:59:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:57.377 03:59:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:08:57.377 03:59:50 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:08:57.377 03:59:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:08:57.377 03:59:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:08:57.377 03:59:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:08:57.377 03:59:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:08:57.377 03:59:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:08:57.377 03:59:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:08:57.377 03:59:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:08:57.377 03:59:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:57.377 03:59:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:57.377 malloc2 00:08:57.377 03:59:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:57.377 03:59:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:08:57.377 03:59:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:57.377 03:59:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:57.377 [2024-12-06 03:59:50.630821] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:08:57.377 [2024-12-06 03:59:50.630930] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:57.377 [2024-12-06 03:59:50.630995] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:08:57.377 
[2024-12-06 03:59:50.631035] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:57.377 [2024-12-06 03:59:50.633299] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:57.377 [2024-12-06 03:59:50.633373] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:08:57.377 pt2 00:08:57.377 03:59:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:57.377 03:59:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:08:57.377 03:59:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:08:57.377 03:59:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''pt1 pt2'\''' -n raid_bdev1 -s 00:08:57.377 03:59:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:57.377 03:59:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:57.377 [2024-12-06 03:59:50.642863] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:08:57.377 [2024-12-06 03:59:50.644660] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:08:57.377 [2024-12-06 03:59:50.644815] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:08:57.377 [2024-12-06 03:59:50.644829] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:08:57.377 [2024-12-06 03:59:50.645168] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:08:57.377 [2024-12-06 03:59:50.645368] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:08:57.377 [2024-12-06 03:59:50.645438] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:08:57.377 [2024-12-06 03:59:50.645668] bdev_raid.c: 
345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:57.377 03:59:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:57.377 03:59:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:08:57.377 03:59:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:57.377 03:59:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:57.377 03:59:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:57.377 03:59:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:57.377 03:59:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:57.377 03:59:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:57.377 03:59:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:57.377 03:59:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:57.377 03:59:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:57.377 03:59:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:57.377 03:59:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:57.377 03:59:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:57.377 03:59:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:57.377 03:59:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:57.378 03:59:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:57.378 "name": "raid_bdev1", 00:08:57.378 "uuid": 
"37dae8d6-8d8b-4fb9-8523-15254af32fb7", 00:08:57.378 "strip_size_kb": 64, 00:08:57.378 "state": "online", 00:08:57.378 "raid_level": "raid0", 00:08:57.378 "superblock": true, 00:08:57.378 "num_base_bdevs": 2, 00:08:57.378 "num_base_bdevs_discovered": 2, 00:08:57.378 "num_base_bdevs_operational": 2, 00:08:57.378 "base_bdevs_list": [ 00:08:57.378 { 00:08:57.378 "name": "pt1", 00:08:57.378 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:57.378 "is_configured": true, 00:08:57.378 "data_offset": 2048, 00:08:57.378 "data_size": 63488 00:08:57.378 }, 00:08:57.378 { 00:08:57.378 "name": "pt2", 00:08:57.378 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:57.378 "is_configured": true, 00:08:57.378 "data_offset": 2048, 00:08:57.378 "data_size": 63488 00:08:57.378 } 00:08:57.378 ] 00:08:57.378 }' 00:08:57.378 03:59:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:57.378 03:59:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:57.968 03:59:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:08:57.968 03:59:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:08:57.968 03:59:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:08:57.968 03:59:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:08:57.968 03:59:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:08:57.968 03:59:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:57.968 03:59:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:08:57.968 03:59:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:57.968 03:59:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:57.968 
03:59:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:57.968 [2024-12-06 03:59:51.094544] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:57.968 03:59:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:57.968 03:59:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:57.968 "name": "raid_bdev1", 00:08:57.968 "aliases": [ 00:08:57.968 "37dae8d6-8d8b-4fb9-8523-15254af32fb7" 00:08:57.968 ], 00:08:57.968 "product_name": "Raid Volume", 00:08:57.968 "block_size": 512, 00:08:57.968 "num_blocks": 126976, 00:08:57.968 "uuid": "37dae8d6-8d8b-4fb9-8523-15254af32fb7", 00:08:57.968 "assigned_rate_limits": { 00:08:57.968 "rw_ios_per_sec": 0, 00:08:57.968 "rw_mbytes_per_sec": 0, 00:08:57.968 "r_mbytes_per_sec": 0, 00:08:57.968 "w_mbytes_per_sec": 0 00:08:57.968 }, 00:08:57.968 "claimed": false, 00:08:57.968 "zoned": false, 00:08:57.968 "supported_io_types": { 00:08:57.969 "read": true, 00:08:57.969 "write": true, 00:08:57.969 "unmap": true, 00:08:57.969 "flush": true, 00:08:57.969 "reset": true, 00:08:57.969 "nvme_admin": false, 00:08:57.969 "nvme_io": false, 00:08:57.969 "nvme_io_md": false, 00:08:57.969 "write_zeroes": true, 00:08:57.969 "zcopy": false, 00:08:57.969 "get_zone_info": false, 00:08:57.969 "zone_management": false, 00:08:57.969 "zone_append": false, 00:08:57.969 "compare": false, 00:08:57.969 "compare_and_write": false, 00:08:57.969 "abort": false, 00:08:57.969 "seek_hole": false, 00:08:57.969 "seek_data": false, 00:08:57.969 "copy": false, 00:08:57.969 "nvme_iov_md": false 00:08:57.969 }, 00:08:57.969 "memory_domains": [ 00:08:57.969 { 00:08:57.969 "dma_device_id": "system", 00:08:57.969 "dma_device_type": 1 00:08:57.969 }, 00:08:57.969 { 00:08:57.969 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:57.969 "dma_device_type": 2 00:08:57.969 }, 00:08:57.969 { 00:08:57.969 "dma_device_id": "system", 00:08:57.969 
"dma_device_type": 1 00:08:57.969 }, 00:08:57.969 { 00:08:57.969 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:57.969 "dma_device_type": 2 00:08:57.969 } 00:08:57.969 ], 00:08:57.969 "driver_specific": { 00:08:57.969 "raid": { 00:08:57.969 "uuid": "37dae8d6-8d8b-4fb9-8523-15254af32fb7", 00:08:57.969 "strip_size_kb": 64, 00:08:57.969 "state": "online", 00:08:57.969 "raid_level": "raid0", 00:08:57.969 "superblock": true, 00:08:57.969 "num_base_bdevs": 2, 00:08:57.969 "num_base_bdevs_discovered": 2, 00:08:57.969 "num_base_bdevs_operational": 2, 00:08:57.969 "base_bdevs_list": [ 00:08:57.969 { 00:08:57.969 "name": "pt1", 00:08:57.969 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:57.969 "is_configured": true, 00:08:57.969 "data_offset": 2048, 00:08:57.969 "data_size": 63488 00:08:57.969 }, 00:08:57.969 { 00:08:57.969 "name": "pt2", 00:08:57.969 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:57.969 "is_configured": true, 00:08:57.969 "data_offset": 2048, 00:08:57.969 "data_size": 63488 00:08:57.969 } 00:08:57.969 ] 00:08:57.969 } 00:08:57.969 } 00:08:57.969 }' 00:08:57.969 03:59:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:57.969 03:59:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:08:57.969 pt2' 00:08:57.969 03:59:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:57.969 03:59:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:57.969 03:59:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:57.969 03:59:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:08:57.969 03:59:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:57.969 03:59:51 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:57.969 03:59:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:57.969 03:59:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:57.969 03:59:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:57.969 03:59:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:57.969 03:59:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:57.969 03:59:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:57.969 03:59:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:08:57.969 03:59:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:57.969 03:59:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:57.969 03:59:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:57.969 03:59:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:57.969 03:59:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:57.969 03:59:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:08:57.969 03:59:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:08:57.969 03:59:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:57.969 03:59:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:57.969 [2024-12-06 03:59:51.309989] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: 
raid_bdev_dump_config_json 00:08:58.230 03:59:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:58.230 03:59:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=37dae8d6-8d8b-4fb9-8523-15254af32fb7 00:08:58.230 03:59:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 37dae8d6-8d8b-4fb9-8523-15254af32fb7 ']' 00:08:58.230 03:59:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:08:58.230 03:59:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:58.230 03:59:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:58.230 [2024-12-06 03:59:51.337645] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:58.230 [2024-12-06 03:59:51.337724] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:58.230 [2024-12-06 03:59:51.337821] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:58.230 [2024-12-06 03:59:51.337875] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:58.230 [2024-12-06 03:59:51.337888] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:08:58.230 03:59:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:58.230 03:59:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:58.230 03:59:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:08:58.230 03:59:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:58.230 03:59:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:58.230 03:59:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- 
# [[ 0 == 0 ]] 00:08:58.230 03:59:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:08:58.230 03:59:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:08:58.230 03:59:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:08:58.230 03:59:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:08:58.230 03:59:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:58.230 03:59:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:58.230 03:59:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:58.230 03:59:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:08:58.230 03:59:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:08:58.230 03:59:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:58.230 03:59:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:58.230 03:59:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:58.230 03:59:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:08:58.230 03:59:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:58.230 03:59:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:58.230 03:59:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:08:58.230 03:59:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:58.230 03:59:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:08:58.230 03:59:51 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:08:58.230 03:59:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:08:58.230 03:59:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:08:58.230 03:59:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:08:58.230 03:59:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:58.230 03:59:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:08:58.230 03:59:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:58.230 03:59:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:08:58.230 03:59:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:58.230 03:59:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:58.230 [2024-12-06 03:59:51.457496] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:08:58.230 [2024-12-06 03:59:51.459581] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:08:58.230 [2024-12-06 03:59:51.459713] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:08:58.230 [2024-12-06 03:59:51.459838] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:08:58.230 [2024-12-06 03:59:51.459901] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:58.230 [2024-12-06 03:59:51.459952] bdev_raid.c: 
380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:08:58.230 request: 00:08:58.230 { 00:08:58.230 "name": "raid_bdev1", 00:08:58.230 "raid_level": "raid0", 00:08:58.230 "base_bdevs": [ 00:08:58.230 "malloc1", 00:08:58.230 "malloc2" 00:08:58.230 ], 00:08:58.230 "strip_size_kb": 64, 00:08:58.230 "superblock": false, 00:08:58.230 "method": "bdev_raid_create", 00:08:58.230 "req_id": 1 00:08:58.230 } 00:08:58.230 Got JSON-RPC error response 00:08:58.230 response: 00:08:58.230 { 00:08:58.230 "code": -17, 00:08:58.230 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:08:58.230 } 00:08:58.230 03:59:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:08:58.230 03:59:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:08:58.230 03:59:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:08:58.230 03:59:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:08:58.230 03:59:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:08:58.230 03:59:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:58.230 03:59:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:58.230 03:59:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:58.230 03:59:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:08:58.230 03:59:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:58.230 03:59:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:08:58.230 03:59:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:08:58.230 03:59:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p 
pt1 -u 00000000-0000-0000-0000-000000000001 00:08:58.230 03:59:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:58.230 03:59:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:58.230 [2024-12-06 03:59:51.521364] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:08:58.230 [2024-12-06 03:59:51.521484] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:58.230 [2024-12-06 03:59:51.521523] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:08:58.230 [2024-12-06 03:59:51.521557] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:58.230 [2024-12-06 03:59:51.523937] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:58.230 [2024-12-06 03:59:51.524018] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:08:58.230 [2024-12-06 03:59:51.524166] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:08:58.230 [2024-12-06 03:59:51.524277] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:08:58.230 pt1 00:08:58.230 03:59:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:58.230 03:59:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 2 00:08:58.230 03:59:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:58.230 03:59:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:58.230 03:59:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:58.230 03:59:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:58.230 03:59:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # 
local num_base_bdevs_operational=2 00:08:58.231 03:59:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:58.231 03:59:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:58.231 03:59:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:58.231 03:59:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:58.231 03:59:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:58.231 03:59:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:58.231 03:59:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:58.231 03:59:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:58.231 03:59:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:58.231 03:59:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:58.231 "name": "raid_bdev1", 00:08:58.231 "uuid": "37dae8d6-8d8b-4fb9-8523-15254af32fb7", 00:08:58.231 "strip_size_kb": 64, 00:08:58.231 "state": "configuring", 00:08:58.231 "raid_level": "raid0", 00:08:58.231 "superblock": true, 00:08:58.231 "num_base_bdevs": 2, 00:08:58.231 "num_base_bdevs_discovered": 1, 00:08:58.231 "num_base_bdevs_operational": 2, 00:08:58.231 "base_bdevs_list": [ 00:08:58.231 { 00:08:58.231 "name": "pt1", 00:08:58.231 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:58.231 "is_configured": true, 00:08:58.231 "data_offset": 2048, 00:08:58.231 "data_size": 63488 00:08:58.231 }, 00:08:58.231 { 00:08:58.231 "name": null, 00:08:58.231 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:58.231 "is_configured": false, 00:08:58.231 "data_offset": 2048, 00:08:58.231 "data_size": 63488 00:08:58.231 } 00:08:58.231 ] 00:08:58.231 }' 00:08:58.231 03:59:51 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:58.231 03:59:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:58.800 03:59:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']' 00:08:58.800 03:59:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:08:58.800 03:59:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:08:58.800 03:59:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:08:58.800 03:59:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:58.800 03:59:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:58.800 [2024-12-06 03:59:51.964615] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:08:58.800 [2024-12-06 03:59:51.964695] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:58.800 [2024-12-06 03:59:51.964719] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:08:58.800 [2024-12-06 03:59:51.964730] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:58.800 [2024-12-06 03:59:51.965253] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:58.800 [2024-12-06 03:59:51.965276] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:08:58.800 [2024-12-06 03:59:51.965363] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:08:58.800 [2024-12-06 03:59:51.965391] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:08:58.800 [2024-12-06 03:59:51.965514] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:08:58.800 [2024-12-06 03:59:51.965525] 
bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:08:58.800 [2024-12-06 03:59:51.965785] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:08:58.800 [2024-12-06 03:59:51.965930] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:08:58.800 [2024-12-06 03:59:51.965944] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:08:58.800 [2024-12-06 03:59:51.966130] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:58.800 pt2 00:08:58.800 03:59:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:58.800 03:59:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:08:58.800 03:59:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:08:58.800 03:59:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:08:58.800 03:59:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:58.800 03:59:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:58.800 03:59:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:58.800 03:59:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:58.800 03:59:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:58.800 03:59:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:58.800 03:59:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:58.800 03:59:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:58.800 03:59:51 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:08:58.800 03:59:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:58.800 03:59:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:58.800 03:59:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:58.800 03:59:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:58.800 03:59:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:58.800 03:59:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:58.800 "name": "raid_bdev1", 00:08:58.800 "uuid": "37dae8d6-8d8b-4fb9-8523-15254af32fb7", 00:08:58.800 "strip_size_kb": 64, 00:08:58.800 "state": "online", 00:08:58.800 "raid_level": "raid0", 00:08:58.800 "superblock": true, 00:08:58.800 "num_base_bdevs": 2, 00:08:58.800 "num_base_bdevs_discovered": 2, 00:08:58.800 "num_base_bdevs_operational": 2, 00:08:58.800 "base_bdevs_list": [ 00:08:58.800 { 00:08:58.800 "name": "pt1", 00:08:58.800 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:58.800 "is_configured": true, 00:08:58.800 "data_offset": 2048, 00:08:58.800 "data_size": 63488 00:08:58.800 }, 00:08:58.800 { 00:08:58.800 "name": "pt2", 00:08:58.800 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:58.800 "is_configured": true, 00:08:58.800 "data_offset": 2048, 00:08:58.800 "data_size": 63488 00:08:58.800 } 00:08:58.800 ] 00:08:58.800 }' 00:08:58.800 03:59:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:58.800 03:59:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:59.061 03:59:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:08:59.061 03:59:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:08:59.061 
03:59:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:08:59.061 03:59:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:08:59.061 03:59:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:08:59.061 03:59:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:59.061 03:59:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:59.061 03:59:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:08:59.061 03:59:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:59.061 03:59:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:59.061 [2024-12-06 03:59:52.400324] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:59.321 03:59:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:59.321 03:59:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:59.321 "name": "raid_bdev1", 00:08:59.321 "aliases": [ 00:08:59.321 "37dae8d6-8d8b-4fb9-8523-15254af32fb7" 00:08:59.321 ], 00:08:59.321 "product_name": "Raid Volume", 00:08:59.321 "block_size": 512, 00:08:59.321 "num_blocks": 126976, 00:08:59.321 "uuid": "37dae8d6-8d8b-4fb9-8523-15254af32fb7", 00:08:59.321 "assigned_rate_limits": { 00:08:59.321 "rw_ios_per_sec": 0, 00:08:59.321 "rw_mbytes_per_sec": 0, 00:08:59.321 "r_mbytes_per_sec": 0, 00:08:59.321 "w_mbytes_per_sec": 0 00:08:59.321 }, 00:08:59.321 "claimed": false, 00:08:59.321 "zoned": false, 00:08:59.321 "supported_io_types": { 00:08:59.321 "read": true, 00:08:59.321 "write": true, 00:08:59.321 "unmap": true, 00:08:59.321 "flush": true, 00:08:59.321 "reset": true, 00:08:59.321 "nvme_admin": false, 00:08:59.321 "nvme_io": false, 00:08:59.321 "nvme_io_md": false, 00:08:59.321 
"write_zeroes": true, 00:08:59.321 "zcopy": false, 00:08:59.321 "get_zone_info": false, 00:08:59.321 "zone_management": false, 00:08:59.321 "zone_append": false, 00:08:59.321 "compare": false, 00:08:59.321 "compare_and_write": false, 00:08:59.321 "abort": false, 00:08:59.321 "seek_hole": false, 00:08:59.321 "seek_data": false, 00:08:59.321 "copy": false, 00:08:59.321 "nvme_iov_md": false 00:08:59.321 }, 00:08:59.321 "memory_domains": [ 00:08:59.321 { 00:08:59.321 "dma_device_id": "system", 00:08:59.321 "dma_device_type": 1 00:08:59.321 }, 00:08:59.321 { 00:08:59.321 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:59.321 "dma_device_type": 2 00:08:59.321 }, 00:08:59.321 { 00:08:59.321 "dma_device_id": "system", 00:08:59.321 "dma_device_type": 1 00:08:59.321 }, 00:08:59.321 { 00:08:59.321 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:59.321 "dma_device_type": 2 00:08:59.321 } 00:08:59.321 ], 00:08:59.321 "driver_specific": { 00:08:59.321 "raid": { 00:08:59.321 "uuid": "37dae8d6-8d8b-4fb9-8523-15254af32fb7", 00:08:59.321 "strip_size_kb": 64, 00:08:59.321 "state": "online", 00:08:59.321 "raid_level": "raid0", 00:08:59.321 "superblock": true, 00:08:59.321 "num_base_bdevs": 2, 00:08:59.321 "num_base_bdevs_discovered": 2, 00:08:59.321 "num_base_bdevs_operational": 2, 00:08:59.321 "base_bdevs_list": [ 00:08:59.321 { 00:08:59.321 "name": "pt1", 00:08:59.321 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:59.321 "is_configured": true, 00:08:59.321 "data_offset": 2048, 00:08:59.321 "data_size": 63488 00:08:59.321 }, 00:08:59.321 { 00:08:59.321 "name": "pt2", 00:08:59.321 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:59.321 "is_configured": true, 00:08:59.321 "data_offset": 2048, 00:08:59.321 "data_size": 63488 00:08:59.321 } 00:08:59.321 ] 00:08:59.321 } 00:08:59.321 } 00:08:59.321 }' 00:08:59.321 03:59:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 
00:08:59.321 03:59:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:08:59.321 pt2' 00:08:59.321 03:59:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:59.321 03:59:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:59.321 03:59:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:59.321 03:59:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:08:59.321 03:59:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:59.321 03:59:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:59.321 03:59:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:59.321 03:59:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:59.321 03:59:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:59.321 03:59:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:59.321 03:59:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:59.321 03:59:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:59.321 03:59:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:08:59.321 03:59:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:59.321 03:59:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:59.321 03:59:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:59.321 03:59:52 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:59.321 03:59:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:59.321 03:59:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:08:59.321 03:59:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:59.321 03:59:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:59.321 03:59:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:08:59.321 [2024-12-06 03:59:52.615864] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:59.321 03:59:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:59.321 03:59:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 37dae8d6-8d8b-4fb9-8523-15254af32fb7 '!=' 37dae8d6-8d8b-4fb9-8523-15254af32fb7 ']' 00:08:59.321 03:59:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid0 00:08:59.321 03:59:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:59.321 03:59:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1 00:08:59.321 03:59:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 61295 00:08:59.321 03:59:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 61295 ']' 00:08:59.322 03:59:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # kill -0 61295 00:08:59.322 03:59:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # uname 00:08:59.322 03:59:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:59.322 03:59:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61295 00:08:59.581 03:59:52 bdev_raid.raid_superblock_test 
-- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:59.581 03:59:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:59.581 killing process with pid 61295 00:08:59.581 03:59:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 61295' 00:08:59.581 03:59:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@973 -- # kill 61295 00:08:59.581 [2024-12-06 03:59:52.699437] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:59.581 [2024-12-06 03:59:52.699534] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:59.581 [2024-12-06 03:59:52.699589] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:59.581 [2024-12-06 03:59:52.699603] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:08:59.581 03:59:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@978 -- # wait 61295 00:08:59.581 [2024-12-06 03:59:52.911293] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:00.966 03:59:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:09:00.966 00:09:00.966 real 0m4.436s 00:09:00.966 user 0m6.251s 00:09:00.966 sys 0m0.673s 00:09:00.966 03:59:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:00.966 03:59:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:00.966 ************************************ 00:09:00.966 END TEST raid_superblock_test 00:09:00.966 ************************************ 00:09:00.966 03:59:54 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid0 2 read 00:09:00.966 03:59:54 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:09:00.966 03:59:54 bdev_raid -- common/autotest_common.sh@1111 -- # 
xtrace_disable 00:09:00.966 03:59:54 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:00.966 ************************************ 00:09:00.966 START TEST raid_read_error_test 00:09:00.966 ************************************ 00:09:00.966 03:59:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid0 2 read 00:09:00.966 03:59:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0 00:09:00.966 03:59:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2 00:09:00.966 03:59:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:09:00.966 03:59:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:09:00.966 03:59:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:00.966 03:59:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:09:00.966 03:59:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:00.966 03:59:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:00.966 03:59:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:09:00.966 03:59:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:00.966 03:59:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:00.966 03:59:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:09:00.966 03:59:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:09:00.966 03:59:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:09:00.966 03:59:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:09:00.966 03:59:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- 
# local create_arg 00:09:00.966 03:59:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:09:00.966 03:59:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:09:00.966 03:59:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']' 00:09:00.966 03:59:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:09:00.966 03:59:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:09:00.966 03:59:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:09:00.966 03:59:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.rLfY8aZ812 00:09:00.966 03:59:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:09:00.966 03:59:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=61501 00:09:00.966 03:59:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 61501 00:09:00.966 03:59:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # '[' -z 61501 ']' 00:09:00.966 03:59:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:00.966 03:59:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:00.966 03:59:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:00.966 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:09:00.966 03:59:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:00.966 03:59:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:00.966 [2024-12-06 03:59:54.201592] Starting SPDK v25.01-pre git sha1 a4d2a837b / DPDK 24.03.0 initialization... 00:09:00.966 [2024-12-06 03:59:54.201835] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61501 ] 00:09:01.224 [2024-12-06 03:59:54.392861] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:01.225 [2024-12-06 03:59:54.509759] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:01.484 [2024-12-06 03:59:54.723313] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:01.484 [2024-12-06 03:59:54.723461] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:01.742 03:59:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:01.742 03:59:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@868 -- # return 0 00:09:01.742 03:59:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:01.742 03:59:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:09:01.742 03:59:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:01.742 03:59:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:02.000 BaseBdev1_malloc 00:09:02.000 03:59:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:02.000 03:59:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 
00:09:02.000 03:59:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:02.000 03:59:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:02.000 true 00:09:02.000 03:59:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:02.000 03:59:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:09:02.000 03:59:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:02.000 03:59:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:02.000 [2024-12-06 03:59:55.138329] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:09:02.000 [2024-12-06 03:59:55.138399] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:02.000 [2024-12-06 03:59:55.138421] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:09:02.000 [2024-12-06 03:59:55.138431] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:02.000 [2024-12-06 03:59:55.140676] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:02.000 [2024-12-06 03:59:55.140723] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:09:02.000 BaseBdev1 00:09:02.000 03:59:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:02.000 03:59:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:02.000 03:59:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:09:02.000 03:59:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:02.000 03:59:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 
00:09:02.000 BaseBdev2_malloc 00:09:02.000 03:59:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:02.000 03:59:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:09:02.000 03:59:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:02.000 03:59:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:02.000 true 00:09:02.000 03:59:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:02.000 03:59:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:09:02.000 03:59:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:02.000 03:59:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:02.000 [2024-12-06 03:59:55.196855] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:09:02.000 [2024-12-06 03:59:55.196914] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:02.000 [2024-12-06 03:59:55.196947] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:09:02.000 [2024-12-06 03:59:55.196959] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:02.000 [2024-12-06 03:59:55.199242] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:02.000 [2024-12-06 03:59:55.199280] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:09:02.000 BaseBdev2 00:09:02.000 03:59:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:02.001 03:59:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s 00:09:02.001 03:59:55 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:02.001 03:59:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:02.001 [2024-12-06 03:59:55.204912] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:02.001 [2024-12-06 03:59:55.206792] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:02.001 [2024-12-06 03:59:55.207074] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:09:02.001 [2024-12-06 03:59:55.207133] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:09:02.001 [2024-12-06 03:59:55.207425] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:09:02.001 [2024-12-06 03:59:55.207661] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:09:02.001 [2024-12-06 03:59:55.207711] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:09:02.001 [2024-12-06 03:59:55.207932] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:02.001 03:59:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:02.001 03:59:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:09:02.001 03:59:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:02.001 03:59:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:02.001 03:59:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:02.001 03:59:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:02.001 03:59:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 
00:09:02.001 03:59:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:02.001 03:59:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:02.001 03:59:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:02.001 03:59:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:02.001 03:59:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:02.001 03:59:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:02.001 03:59:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:02.001 03:59:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:02.001 03:59:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:02.001 03:59:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:02.001 "name": "raid_bdev1", 00:09:02.001 "uuid": "ffc77c02-26ec-43db-91e2-9aa464055d42", 00:09:02.001 "strip_size_kb": 64, 00:09:02.001 "state": "online", 00:09:02.001 "raid_level": "raid0", 00:09:02.001 "superblock": true, 00:09:02.001 "num_base_bdevs": 2, 00:09:02.001 "num_base_bdevs_discovered": 2, 00:09:02.001 "num_base_bdevs_operational": 2, 00:09:02.001 "base_bdevs_list": [ 00:09:02.001 { 00:09:02.001 "name": "BaseBdev1", 00:09:02.001 "uuid": "cfe3985e-aebb-5974-bab9-5b164c9614cc", 00:09:02.001 "is_configured": true, 00:09:02.001 "data_offset": 2048, 00:09:02.001 "data_size": 63488 00:09:02.001 }, 00:09:02.001 { 00:09:02.001 "name": "BaseBdev2", 00:09:02.001 "uuid": "208f3102-6bf6-5167-94c7-663811325500", 00:09:02.001 "is_configured": true, 00:09:02.001 "data_offset": 2048, 00:09:02.001 "data_size": 63488 00:09:02.001 } 00:09:02.001 ] 00:09:02.001 }' 00:09:02.001 03:59:55 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:02.001 03:59:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:02.570 03:59:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:09:02.570 03:59:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:09:02.570 [2024-12-06 03:59:55.733430] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:09:03.506 03:59:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:09:03.506 03:59:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:03.506 03:59:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:03.506 03:59:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:03.506 03:59:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:09:03.506 03:59:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]] 00:09:03.506 03:59:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=2 00:09:03.506 03:59:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:09:03.506 03:59:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:03.506 03:59:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:03.506 03:59:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:03.506 03:59:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:03.506 03:59:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 
00:09:03.506 03:59:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:03.506 03:59:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:03.506 03:59:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:03.506 03:59:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:03.506 03:59:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:03.506 03:59:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:03.506 03:59:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:03.506 03:59:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:03.506 03:59:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:03.506 03:59:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:03.506 "name": "raid_bdev1", 00:09:03.506 "uuid": "ffc77c02-26ec-43db-91e2-9aa464055d42", 00:09:03.506 "strip_size_kb": 64, 00:09:03.506 "state": "online", 00:09:03.506 "raid_level": "raid0", 00:09:03.506 "superblock": true, 00:09:03.506 "num_base_bdevs": 2, 00:09:03.506 "num_base_bdevs_discovered": 2, 00:09:03.506 "num_base_bdevs_operational": 2, 00:09:03.506 "base_bdevs_list": [ 00:09:03.506 { 00:09:03.506 "name": "BaseBdev1", 00:09:03.506 "uuid": "cfe3985e-aebb-5974-bab9-5b164c9614cc", 00:09:03.506 "is_configured": true, 00:09:03.506 "data_offset": 2048, 00:09:03.506 "data_size": 63488 00:09:03.506 }, 00:09:03.506 { 00:09:03.506 "name": "BaseBdev2", 00:09:03.506 "uuid": "208f3102-6bf6-5167-94c7-663811325500", 00:09:03.506 "is_configured": true, 00:09:03.506 "data_offset": 2048, 00:09:03.506 "data_size": 63488 00:09:03.506 } 00:09:03.506 ] 00:09:03.506 }' 00:09:03.506 03:59:56 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:03.506 03:59:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:03.764 03:59:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:09:03.764 03:59:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:03.764 03:59:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:03.764 [2024-12-06 03:59:57.090034] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:03.764 [2024-12-06 03:59:57.090168] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:03.764 [2024-12-06 03:59:57.093177] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:03.764 [2024-12-06 03:59:57.093268] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:03.764 [2024-12-06 03:59:57.093321] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:03.764 [2024-12-06 03:59:57.093366] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:09:03.764 03:59:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:03.764 { 00:09:03.764 "results": [ 00:09:03.764 { 00:09:03.764 "job": "raid_bdev1", 00:09:03.764 "core_mask": "0x1", 00:09:03.764 "workload": "randrw", 00:09:03.764 "percentage": 50, 00:09:03.764 "status": "finished", 00:09:03.764 "queue_depth": 1, 00:09:03.764 "io_size": 131072, 00:09:03.764 "runtime": 1.357734, 00:09:03.764 "iops": 14793.76667300075, 00:09:03.764 "mibps": 1849.2208341250937, 00:09:03.764 "io_failed": 1, 00:09:03.764 "io_timeout": 0, 00:09:03.764 "avg_latency_us": 93.54601944423852, 00:09:03.764 "min_latency_us": 27.165065502183406, 00:09:03.764 "max_latency_us": 1774.3371179039302 00:09:03.764 } 00:09:03.764 ], 
00:09:03.764 "core_count": 1 00:09:03.764 } 00:09:03.764 03:59:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 61501 00:09:03.764 03:59:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # '[' -z 61501 ']' 00:09:03.764 03:59:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # kill -0 61501 00:09:03.765 03:59:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # uname 00:09:03.765 03:59:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:03.765 03:59:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61501 00:09:04.024 03:59:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:04.024 03:59:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:04.024 03:59:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 61501' 00:09:04.024 killing process with pid 61501 00:09:04.024 03:59:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@973 -- # kill 61501 00:09:04.024 [2024-12-06 03:59:57.137846] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:04.024 03:59:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@978 -- # wait 61501 00:09:04.024 [2024-12-06 03:59:57.278476] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:05.400 03:59:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:09:05.400 03:59:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:09:05.400 03:59:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.rLfY8aZ812 00:09:05.400 03:59:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.74 00:09:05.400 03:59:58 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@846 -- # has_redundancy raid0 00:09:05.400 03:59:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:09:05.400 03:59:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:09:05.400 03:59:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.74 != \0\.\0\0 ]] 00:09:05.400 00:09:05.400 real 0m4.369s 00:09:05.400 user 0m5.283s 00:09:05.400 sys 0m0.535s 00:09:05.400 ************************************ 00:09:05.400 END TEST raid_read_error_test 00:09:05.400 ************************************ 00:09:05.400 03:59:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:05.400 03:59:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:05.400 03:59:58 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test raid0 2 write 00:09:05.401 03:59:58 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:09:05.401 03:59:58 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:05.401 03:59:58 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:05.401 ************************************ 00:09:05.401 START TEST raid_write_error_test 00:09:05.401 ************************************ 00:09:05.401 03:59:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid0 2 write 00:09:05.401 03:59:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0 00:09:05.401 03:59:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2 00:09:05.401 03:59:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:09:05.401 03:59:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:09:05.401 03:59:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:05.401 03:59:58 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:09:05.401 03:59:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:05.401 03:59:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:05.401 03:59:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:09:05.401 03:59:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:05.401 03:59:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:05.401 03:59:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:09:05.401 03:59:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:09:05.401 03:59:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:09:05.401 03:59:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:09:05.401 03:59:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:09:05.401 03:59:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:09:05.401 03:59:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:09:05.401 03:59:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']' 00:09:05.401 03:59:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:09:05.401 03:59:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:09:05.401 03:59:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:09:05.401 03:59:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.lgjQ0sNCzg 00:09:05.401 03:59:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=61647 00:09:05.401 03:59:58 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 61647 00:09:05.401 03:59:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # '[' -z 61647 ']' 00:09:05.401 03:59:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:05.401 03:59:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:05.401 03:59:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:05.401 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:05.401 03:59:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:05.401 03:59:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:09:05.401 03:59:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:05.401 [2024-12-06 03:59:58.646798] Starting SPDK v25.01-pre git sha1 a4d2a837b / DPDK 24.03.0 initialization... 
00:09:05.401 [2024-12-06 03:59:58.646927] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61647 ] 00:09:05.660 [2024-12-06 03:59:58.823381] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:05.660 [2024-12-06 03:59:58.937636] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:05.919 [2024-12-06 03:59:59.141777] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:05.919 [2024-12-06 03:59:59.141811] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:06.178 03:59:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:06.178 03:59:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@868 -- # return 0 00:09:06.178 03:59:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:06.178 03:59:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:09:06.178 03:59:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:06.178 03:59:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:06.437 BaseBdev1_malloc 00:09:06.437 03:59:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:06.437 03:59:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:09:06.437 03:59:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:06.437 03:59:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:06.437 true 00:09:06.438 03:59:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:09:06.438 03:59:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:09:06.438 03:59:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:06.438 03:59:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:06.438 [2024-12-06 03:59:59.548035] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:09:06.438 [2024-12-06 03:59:59.548114] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:06.438 [2024-12-06 03:59:59.548152] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:09:06.438 [2024-12-06 03:59:59.548164] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:06.438 [2024-12-06 03:59:59.550272] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:06.438 [2024-12-06 03:59:59.550310] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:09:06.438 BaseBdev1 00:09:06.438 03:59:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:06.438 03:59:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:06.438 03:59:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:09:06.438 03:59:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:06.438 03:59:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:06.438 BaseBdev2_malloc 00:09:06.438 03:59:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:06.438 03:59:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:09:06.438 03:59:59 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:06.438 03:59:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:06.438 true 00:09:06.438 03:59:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:06.438 03:59:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:09:06.438 03:59:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:06.438 03:59:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:06.438 [2024-12-06 03:59:59.606747] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:09:06.438 [2024-12-06 03:59:59.606807] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:06.438 [2024-12-06 03:59:59.606824] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:09:06.438 [2024-12-06 03:59:59.606834] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:06.438 [2024-12-06 03:59:59.609037] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:06.438 [2024-12-06 03:59:59.609091] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:09:06.438 BaseBdev2 00:09:06.438 03:59:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:06.438 03:59:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s 00:09:06.438 03:59:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:06.438 03:59:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:06.438 [2024-12-06 03:59:59.614804] bdev_raid.c:3326:raid_bdev_configure_base_bdev: 
*DEBUG*: bdev BaseBdev1 is claimed 00:09:06.438 [2024-12-06 03:59:59.616826] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:06.438 [2024-12-06 03:59:59.617160] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:09:06.438 [2024-12-06 03:59:59.617190] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:09:06.438 [2024-12-06 03:59:59.617497] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:09:06.438 [2024-12-06 03:59:59.617699] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:09:06.438 [2024-12-06 03:59:59.617712] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:09:06.438 [2024-12-06 03:59:59.617877] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:06.438 03:59:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:06.438 03:59:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:09:06.438 03:59:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:06.438 03:59:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:06.438 03:59:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:06.438 03:59:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:06.438 03:59:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:06.438 03:59:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:06.438 03:59:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:06.438 03:59:59 bdev_raid.raid_write_error_test 
-- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:06.438 03:59:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:06.438 03:59:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:06.438 03:59:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:06.438 03:59:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:06.438 03:59:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:06.438 03:59:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:06.438 03:59:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:06.438 "name": "raid_bdev1", 00:09:06.438 "uuid": "48c7f5bc-9033-4cf8-a904-0904799f0dbe", 00:09:06.438 "strip_size_kb": 64, 00:09:06.438 "state": "online", 00:09:06.438 "raid_level": "raid0", 00:09:06.438 "superblock": true, 00:09:06.438 "num_base_bdevs": 2, 00:09:06.438 "num_base_bdevs_discovered": 2, 00:09:06.438 "num_base_bdevs_operational": 2, 00:09:06.438 "base_bdevs_list": [ 00:09:06.438 { 00:09:06.438 "name": "BaseBdev1", 00:09:06.438 "uuid": "ee6de507-f8cc-57f2-afc0-3a83660d6d78", 00:09:06.438 "is_configured": true, 00:09:06.438 "data_offset": 2048, 00:09:06.438 "data_size": 63488 00:09:06.438 }, 00:09:06.438 { 00:09:06.438 "name": "BaseBdev2", 00:09:06.438 "uuid": "fe06fc2a-4fae-5ac8-841c-89f469048a88", 00:09:06.438 "is_configured": true, 00:09:06.438 "data_offset": 2048, 00:09:06.438 "data_size": 63488 00:09:06.438 } 00:09:06.438 ] 00:09:06.438 }' 00:09:06.438 03:59:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:06.438 03:59:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:06.696 04:00:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:09:06.696 04:00:00 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:09:06.953 [2024-12-06 04:00:00.107511] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:09:07.889 04:00:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:09:07.889 04:00:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:07.889 04:00:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:07.889 04:00:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:07.889 04:00:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:09:07.889 04:00:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]] 00:09:07.889 04:00:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=2 00:09:07.889 04:00:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:09:07.889 04:00:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:07.889 04:00:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:07.889 04:00:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:07.889 04:00:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:07.889 04:00:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:07.889 04:00:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:07.889 04:00:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:07.889 04:00:01 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:07.889 04:00:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:07.889 04:00:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:07.889 04:00:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:07.889 04:00:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:07.889 04:00:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:07.889 04:00:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:07.889 04:00:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:07.889 "name": "raid_bdev1", 00:09:07.889 "uuid": "48c7f5bc-9033-4cf8-a904-0904799f0dbe", 00:09:07.889 "strip_size_kb": 64, 00:09:07.889 "state": "online", 00:09:07.889 "raid_level": "raid0", 00:09:07.889 "superblock": true, 00:09:07.889 "num_base_bdevs": 2, 00:09:07.889 "num_base_bdevs_discovered": 2, 00:09:07.889 "num_base_bdevs_operational": 2, 00:09:07.889 "base_bdevs_list": [ 00:09:07.889 { 00:09:07.889 "name": "BaseBdev1", 00:09:07.889 "uuid": "ee6de507-f8cc-57f2-afc0-3a83660d6d78", 00:09:07.889 "is_configured": true, 00:09:07.889 "data_offset": 2048, 00:09:07.889 "data_size": 63488 00:09:07.889 }, 00:09:07.889 { 00:09:07.889 "name": "BaseBdev2", 00:09:07.889 "uuid": "fe06fc2a-4fae-5ac8-841c-89f469048a88", 00:09:07.889 "is_configured": true, 00:09:07.889 "data_offset": 2048, 00:09:07.889 "data_size": 63488 00:09:07.889 } 00:09:07.889 ] 00:09:07.889 }' 00:09:07.889 04:00:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:07.889 04:00:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:08.147 04:00:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- 
# rpc_cmd bdev_raid_delete raid_bdev1 00:09:08.147 04:00:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:08.147 04:00:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:08.147 [2024-12-06 04:00:01.458428] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:08.147 [2024-12-06 04:00:01.458466] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:08.147 [2024-12-06 04:00:01.461283] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:08.147 [2024-12-06 04:00:01.461391] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:08.147 [2024-12-06 04:00:01.461444] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:08.147 [2024-12-06 04:00:01.461456] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:09:08.147 { 00:09:08.147 "results": [ 00:09:08.147 { 00:09:08.147 "job": "raid_bdev1", 00:09:08.147 "core_mask": "0x1", 00:09:08.147 "workload": "randrw", 00:09:08.147 "percentage": 50, 00:09:08.147 "status": "finished", 00:09:08.147 "queue_depth": 1, 00:09:08.147 "io_size": 131072, 00:09:08.147 "runtime": 1.351669, 00:09:08.147 "iops": 15262.612370336228, 00:09:08.147 "mibps": 1907.8265462920285, 00:09:08.147 "io_failed": 1, 00:09:08.147 "io_timeout": 0, 00:09:08.147 "avg_latency_us": 90.72865977958722, 00:09:08.147 "min_latency_us": 27.276855895196505, 00:09:08.147 "max_latency_us": 1445.2262008733624 00:09:08.147 } 00:09:08.147 ], 00:09:08.147 "core_count": 1 00:09:08.147 } 00:09:08.147 04:00:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:08.147 04:00:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 61647 00:09:08.147 04:00:01 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@954 -- # '[' -z 61647 ']' 00:09:08.147 04:00:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # kill -0 61647 00:09:08.147 04:00:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # uname 00:09:08.147 04:00:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:08.147 04:00:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61647 00:09:08.147 killing process with pid 61647 00:09:08.147 04:00:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:08.147 04:00:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:08.147 04:00:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 61647' 00:09:08.147 04:00:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@973 -- # kill 61647 00:09:08.147 [2024-12-06 04:00:01.496673] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:08.147 04:00:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@978 -- # wait 61647 00:09:08.405 [2024-12-06 04:00:01.632141] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:09.794 04:00:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.lgjQ0sNCzg 00:09:09.794 04:00:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:09:09.794 04:00:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:09:09.794 04:00:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.74 00:09:09.794 04:00:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid0 00:09:09.794 04:00:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:09:09.794 04:00:02 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@200 -- # return 1 00:09:09.794 ************************************ 00:09:09.794 END TEST raid_write_error_test 00:09:09.794 ************************************ 00:09:09.794 04:00:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.74 != \0\.\0\0 ]] 00:09:09.794 00:09:09.794 real 0m4.288s 00:09:09.794 user 0m5.125s 00:09:09.794 sys 0m0.515s 00:09:09.794 04:00:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:09.794 04:00:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:09.794 04:00:02 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:09:09.794 04:00:02 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test concat 2 false 00:09:09.794 04:00:02 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:09:09.794 04:00:02 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:09.794 04:00:02 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:09.794 ************************************ 00:09:09.794 START TEST raid_state_function_test 00:09:09.794 ************************************ 00:09:09.794 04:00:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test concat 2 false 00:09:09.794 04:00:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=concat 00:09:09.795 04:00:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:09:09.795 04:00:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:09:09.795 04:00:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:09:09.795 04:00:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:09:09.795 04:00:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 
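The pass/fail criterion applied just above can be reproduced outside the harness: `bdev_raid.sh` extracts a failures-per-second figure (`fail_per_s=0.74`) from bdevperf's output and, because raid0 has no redundancy, the test passes when that figure is non-zero after the single injected write error. A minimal Python sketch of that arithmetic, using the `results` object copied from the log (deriving `fail_per_s` as `io_failed / runtime` is an assumption about how bdevperf computes the column the script reads with awk):

```python
# Recompute bdevperf's failure rate from the "results" object logged above.
results = {
    "job": "raid_bdev1",
    "runtime": 1.351669,          # seconds
    "iops": 15262.612370336228,
    "io_failed": 1,               # the single injected write error
}

# One failed I/O over the run's duration matches the fail_per_s=0.74
# the test script extracted from the bdevperf output file.
fail_per_s = round(results["io_failed"] / results["runtime"], 2)
print(fail_per_s)  # 0.74

# raid0 carries no redundancy, so any non-zero failure rate is the
# expected outcome here (the shell check `[[ 0.74 != 0.00 ]]` passes).
assert fail_per_s != 0.00
```

This is why `has_redundancy raid0` returning 1 above selects the non-zero branch of the assertion rather than requiring a clean `0.00`.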
00:09:09.795 04:00:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:09:09.795 04:00:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:09.795 04:00:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:09.795 04:00:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:09:09.795 04:00:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:09.795 04:00:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:09.795 04:00:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:09:09.795 04:00:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:09:09.795 04:00:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:09:09.795 04:00:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:09:09.795 04:00:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:09:09.795 04:00:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:09:09.795 04:00:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']' 00:09:09.795 04:00:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:09:09.795 04:00:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:09:09.795 04:00:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:09:09.795 04:00:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:09:09.795 04:00:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=61785 00:09:09.795 04:00:02 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:09:09.795 04:00:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 61785' 00:09:09.795 Process raid pid: 61785 00:09:09.795 04:00:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 61785 00:09:09.795 04:00:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 61785 ']' 00:09:09.795 04:00:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:09.795 04:00:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:09.795 04:00:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:09.795 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:09.795 04:00:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:09.795 04:00:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:09.795 [2024-12-06 04:00:02.970344] Starting SPDK v25.01-pre git sha1 a4d2a837b / DPDK 24.03.0 initialization... 
00:09:09.795 [2024-12-06 04:00:02.970536] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:10.070 [2024-12-06 04:00:03.142391] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:10.070 [2024-12-06 04:00:03.256889] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:10.329 [2024-12-06 04:00:03.474935] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:10.329 [2024-12-06 04:00:03.475053] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:10.588 04:00:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:10.588 04:00:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:09:10.588 04:00:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:09:10.588 04:00:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:10.588 04:00:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:10.588 [2024-12-06 04:00:03.815615] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:10.588 [2024-12-06 04:00:03.815679] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:10.588 [2024-12-06 04:00:03.815690] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:10.588 [2024-12-06 04:00:03.815700] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:10.588 04:00:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:10.588 04:00:03 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:09:10.588 04:00:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:10.588 04:00:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:10.588 04:00:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:10.588 04:00:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:10.588 04:00:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:10.588 04:00:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:10.588 04:00:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:10.588 04:00:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:10.588 04:00:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:10.588 04:00:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:10.588 04:00:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:10.588 04:00:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:10.588 04:00:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:10.588 04:00:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:10.588 04:00:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:10.588 "name": "Existed_Raid", 00:09:10.588 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:10.588 "strip_size_kb": 64, 00:09:10.588 "state": "configuring", 00:09:10.588 
"raid_level": "concat", 00:09:10.588 "superblock": false, 00:09:10.588 "num_base_bdevs": 2, 00:09:10.588 "num_base_bdevs_discovered": 0, 00:09:10.588 "num_base_bdevs_operational": 2, 00:09:10.588 "base_bdevs_list": [ 00:09:10.588 { 00:09:10.588 "name": "BaseBdev1", 00:09:10.588 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:10.588 "is_configured": false, 00:09:10.588 "data_offset": 0, 00:09:10.588 "data_size": 0 00:09:10.588 }, 00:09:10.588 { 00:09:10.588 "name": "BaseBdev2", 00:09:10.588 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:10.588 "is_configured": false, 00:09:10.588 "data_offset": 0, 00:09:10.588 "data_size": 0 00:09:10.588 } 00:09:10.588 ] 00:09:10.588 }' 00:09:10.588 04:00:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:10.588 04:00:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:11.156 04:00:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:11.156 04:00:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:11.156 04:00:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:11.156 [2024-12-06 04:00:04.270810] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:11.156 [2024-12-06 04:00:04.270916] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:09:11.156 04:00:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:11.156 04:00:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:09:11.156 04:00:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:11.156 04:00:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set 
+x 00:09:11.156 [2024-12-06 04:00:04.278782] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:11.156 [2024-12-06 04:00:04.278892] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:11.156 [2024-12-06 04:00:04.278925] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:11.156 [2024-12-06 04:00:04.278951] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:11.156 04:00:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:11.156 04:00:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:09:11.156 04:00:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:11.156 04:00:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:11.156 [2024-12-06 04:00:04.322832] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:11.156 BaseBdev1 00:09:11.156 04:00:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:11.156 04:00:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:09:11.156 04:00:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:09:11.156 04:00:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:11.156 04:00:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:09:11.156 04:00:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:11.156 04:00:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:11.156 04:00:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # 
rpc_cmd bdev_wait_for_examine 00:09:11.156 04:00:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:11.156 04:00:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:11.156 04:00:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:11.156 04:00:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:09:11.156 04:00:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:11.156 04:00:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:11.156 [ 00:09:11.156 { 00:09:11.156 "name": "BaseBdev1", 00:09:11.156 "aliases": [ 00:09:11.156 "cb4d8610-01b1-4239-8197-93fe878512d3" 00:09:11.156 ], 00:09:11.156 "product_name": "Malloc disk", 00:09:11.156 "block_size": 512, 00:09:11.156 "num_blocks": 65536, 00:09:11.156 "uuid": "cb4d8610-01b1-4239-8197-93fe878512d3", 00:09:11.156 "assigned_rate_limits": { 00:09:11.156 "rw_ios_per_sec": 0, 00:09:11.156 "rw_mbytes_per_sec": 0, 00:09:11.156 "r_mbytes_per_sec": 0, 00:09:11.156 "w_mbytes_per_sec": 0 00:09:11.156 }, 00:09:11.156 "claimed": true, 00:09:11.156 "claim_type": "exclusive_write", 00:09:11.156 "zoned": false, 00:09:11.156 "supported_io_types": { 00:09:11.156 "read": true, 00:09:11.156 "write": true, 00:09:11.156 "unmap": true, 00:09:11.156 "flush": true, 00:09:11.156 "reset": true, 00:09:11.156 "nvme_admin": false, 00:09:11.156 "nvme_io": false, 00:09:11.156 "nvme_io_md": false, 00:09:11.156 "write_zeroes": true, 00:09:11.156 "zcopy": true, 00:09:11.156 "get_zone_info": false, 00:09:11.156 "zone_management": false, 00:09:11.156 "zone_append": false, 00:09:11.156 "compare": false, 00:09:11.156 "compare_and_write": false, 00:09:11.156 "abort": true, 00:09:11.156 "seek_hole": false, 00:09:11.156 "seek_data": false, 00:09:11.156 "copy": true, 00:09:11.156 "nvme_iov_md": 
false 00:09:11.156 }, 00:09:11.156 "memory_domains": [ 00:09:11.156 { 00:09:11.156 "dma_device_id": "system", 00:09:11.156 "dma_device_type": 1 00:09:11.156 }, 00:09:11.156 { 00:09:11.156 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:11.156 "dma_device_type": 2 00:09:11.156 } 00:09:11.156 ], 00:09:11.156 "driver_specific": {} 00:09:11.156 } 00:09:11.156 ] 00:09:11.156 04:00:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:11.156 04:00:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:09:11.156 04:00:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:09:11.156 04:00:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:11.156 04:00:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:11.156 04:00:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:11.156 04:00:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:11.156 04:00:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:11.156 04:00:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:11.156 04:00:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:11.156 04:00:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:11.156 04:00:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:11.156 04:00:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:11.156 04:00:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:11.156 
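The `verify_raid_bdev_state` helper exercised throughout this test selects one raid bdev's record from `bdev_raid_get_bdevs all` with jq and compares a handful of fields against expected values. A rough Python equivalent of those comparisons, fed the "configuring" record printed in this log (field names are taken from the log; the shell helper in `bdev_raid.sh` may assert more than is shown here):

```python
import json

# Record copied from the bdev_raid_get_bdevs output logged above, after
# BaseBdev1 was created but before BaseBdev2 exists.
raid_bdev_info = json.loads("""{
  "name": "Existed_Raid",
  "strip_size_kb": 64,
  "state": "configuring",
  "raid_level": "concat",
  "superblock": false,
  "num_base_bdevs": 2,
  "num_base_bdevs_discovered": 1,
  "num_base_bdevs_operational": 2
}""")

def verify_raid_bdev_state(info, expected_state, raid_level, strip_size, operational):
    # Mirrors the field comparisons the shell helper performs on jq output.
    assert info["state"] == expected_state
    assert info["raid_level"] == raid_level
    assert info["strip_size_kb"] == strip_size
    assert info["num_base_bdevs_operational"] == operational

# With only one of two base bdevs present, the raid stays "configuring"
# and num_base_bdevs_discovered lags num_base_bdevs_operational.
verify_raid_bdev_state(raid_bdev_info, "configuring", "concat", 64, 2)
assert raid_bdev_info["num_base_bdevs_discovered"] == 1
```

Once BaseBdev2 is also registered, the same check is re-run expecting `num_base_bdevs_discovered` to reach 2 and the state to move on from "configuring".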
04:00:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:11.156 04:00:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:11.156 04:00:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:11.156 04:00:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:11.156 "name": "Existed_Raid", 00:09:11.156 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:11.156 "strip_size_kb": 64, 00:09:11.156 "state": "configuring", 00:09:11.156 "raid_level": "concat", 00:09:11.156 "superblock": false, 00:09:11.156 "num_base_bdevs": 2, 00:09:11.156 "num_base_bdevs_discovered": 1, 00:09:11.156 "num_base_bdevs_operational": 2, 00:09:11.156 "base_bdevs_list": [ 00:09:11.156 { 00:09:11.156 "name": "BaseBdev1", 00:09:11.156 "uuid": "cb4d8610-01b1-4239-8197-93fe878512d3", 00:09:11.156 "is_configured": true, 00:09:11.156 "data_offset": 0, 00:09:11.156 "data_size": 65536 00:09:11.156 }, 00:09:11.156 { 00:09:11.156 "name": "BaseBdev2", 00:09:11.157 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:11.157 "is_configured": false, 00:09:11.157 "data_offset": 0, 00:09:11.157 "data_size": 0 00:09:11.157 } 00:09:11.157 ] 00:09:11.157 }' 00:09:11.157 04:00:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:11.157 04:00:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:11.725 04:00:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:11.725 04:00:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:11.725 04:00:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:11.725 [2024-12-06 04:00:04.806084] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:11.725 [2024-12-06 04:00:04.806143] bdev_raid.c: 
380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:09:11.725 04:00:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:11.725 04:00:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:09:11.725 04:00:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:11.725 04:00:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:11.725 [2024-12-06 04:00:04.814130] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:11.725 [2024-12-06 04:00:04.816306] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:11.725 [2024-12-06 04:00:04.816358] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:11.725 04:00:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:11.725 04:00:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:09:11.725 04:00:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:11.725 04:00:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:09:11.725 04:00:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:11.725 04:00:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:11.725 04:00:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:11.725 04:00:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:11.725 04:00:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # 
local num_base_bdevs_operational=2 00:09:11.725 04:00:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:11.725 04:00:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:11.725 04:00:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:11.725 04:00:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:11.725 04:00:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:11.725 04:00:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:11.725 04:00:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:11.725 04:00:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:11.725 04:00:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:11.725 04:00:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:11.725 "name": "Existed_Raid", 00:09:11.725 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:11.725 "strip_size_kb": 64, 00:09:11.725 "state": "configuring", 00:09:11.725 "raid_level": "concat", 00:09:11.725 "superblock": false, 00:09:11.725 "num_base_bdevs": 2, 00:09:11.725 "num_base_bdevs_discovered": 1, 00:09:11.725 "num_base_bdevs_operational": 2, 00:09:11.725 "base_bdevs_list": [ 00:09:11.725 { 00:09:11.725 "name": "BaseBdev1", 00:09:11.725 "uuid": "cb4d8610-01b1-4239-8197-93fe878512d3", 00:09:11.725 "is_configured": true, 00:09:11.725 "data_offset": 0, 00:09:11.725 "data_size": 65536 00:09:11.725 }, 00:09:11.725 { 00:09:11.725 "name": "BaseBdev2", 00:09:11.725 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:11.725 "is_configured": false, 00:09:11.725 "data_offset": 0, 00:09:11.725 "data_size": 0 00:09:11.725 } 
00:09:11.725 ] 00:09:11.725 }' 00:09:11.725 04:00:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:11.725 04:00:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:11.983 04:00:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:09:11.983 04:00:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:11.983 04:00:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:11.983 [2024-12-06 04:00:05.251111] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:11.983 [2024-12-06 04:00:05.251258] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:09:11.983 [2024-12-06 04:00:05.251288] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:09:11.983 [2024-12-06 04:00:05.251658] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:09:11.983 [2024-12-06 04:00:05.251924] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:09:11.983 [2024-12-06 04:00:05.251977] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:09:11.983 [2024-12-06 04:00:05.252383] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:11.983 BaseBdev2 00:09:11.983 04:00:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:11.983 04:00:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:09:11.983 04:00:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:09:11.983 04:00:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:11.983 04:00:05 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:09:11.983 04:00:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:11.983 04:00:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:11.983 04:00:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:11.983 04:00:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:11.983 04:00:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:11.983 04:00:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:11.983 04:00:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:09:11.983 04:00:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:11.983 04:00:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:11.983 [ 00:09:11.983 { 00:09:11.983 "name": "BaseBdev2", 00:09:11.983 "aliases": [ 00:09:11.983 "fb294993-6388-4499-bcad-b509a4faa459" 00:09:11.983 ], 00:09:11.983 "product_name": "Malloc disk", 00:09:11.983 "block_size": 512, 00:09:11.983 "num_blocks": 65536, 00:09:11.983 "uuid": "fb294993-6388-4499-bcad-b509a4faa459", 00:09:11.983 "assigned_rate_limits": { 00:09:11.983 "rw_ios_per_sec": 0, 00:09:11.983 "rw_mbytes_per_sec": 0, 00:09:11.983 "r_mbytes_per_sec": 0, 00:09:11.983 "w_mbytes_per_sec": 0 00:09:11.983 }, 00:09:11.983 "claimed": true, 00:09:11.983 "claim_type": "exclusive_write", 00:09:11.983 "zoned": false, 00:09:11.983 "supported_io_types": { 00:09:11.983 "read": true, 00:09:11.983 "write": true, 00:09:11.983 "unmap": true, 00:09:11.983 "flush": true, 00:09:11.983 "reset": true, 00:09:11.983 "nvme_admin": false, 00:09:11.983 "nvme_io": false, 00:09:11.983 "nvme_io_md": 
false, 00:09:11.983 "write_zeroes": true, 00:09:11.983 "zcopy": true, 00:09:11.983 "get_zone_info": false, 00:09:11.983 "zone_management": false, 00:09:11.983 "zone_append": false, 00:09:11.983 "compare": false, 00:09:11.983 "compare_and_write": false, 00:09:11.983 "abort": true, 00:09:11.983 "seek_hole": false, 00:09:11.983 "seek_data": false, 00:09:11.983 "copy": true, 00:09:11.983 "nvme_iov_md": false 00:09:11.983 }, 00:09:11.983 "memory_domains": [ 00:09:11.983 { 00:09:11.983 "dma_device_id": "system", 00:09:11.983 "dma_device_type": 1 00:09:11.983 }, 00:09:11.983 { 00:09:11.983 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:11.983 "dma_device_type": 2 00:09:11.983 } 00:09:11.983 ], 00:09:11.983 "driver_specific": {} 00:09:11.983 } 00:09:11.983 ] 00:09:11.983 04:00:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:11.983 04:00:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:09:11.983 04:00:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:09:11.983 04:00:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:11.983 04:00:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 64 2 00:09:11.983 04:00:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:11.983 04:00:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:11.983 04:00:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:11.983 04:00:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:11.983 04:00:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:11.983 04:00:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:09:11.983 04:00:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:11.983 04:00:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:11.983 04:00:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:11.983 04:00:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:11.983 04:00:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:11.983 04:00:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:11.983 04:00:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:11.983 04:00:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:12.241 04:00:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:12.241 "name": "Existed_Raid", 00:09:12.241 "uuid": "24aef25d-6b9b-4ee1-9609-5c3c501fb736", 00:09:12.241 "strip_size_kb": 64, 00:09:12.241 "state": "online", 00:09:12.241 "raid_level": "concat", 00:09:12.241 "superblock": false, 00:09:12.241 "num_base_bdevs": 2, 00:09:12.241 "num_base_bdevs_discovered": 2, 00:09:12.241 "num_base_bdevs_operational": 2, 00:09:12.241 "base_bdevs_list": [ 00:09:12.241 { 00:09:12.241 "name": "BaseBdev1", 00:09:12.241 "uuid": "cb4d8610-01b1-4239-8197-93fe878512d3", 00:09:12.241 "is_configured": true, 00:09:12.241 "data_offset": 0, 00:09:12.241 "data_size": 65536 00:09:12.241 }, 00:09:12.241 { 00:09:12.241 "name": "BaseBdev2", 00:09:12.241 "uuid": "fb294993-6388-4499-bcad-b509a4faa459", 00:09:12.241 "is_configured": true, 00:09:12.241 "data_offset": 0, 00:09:12.241 "data_size": 65536 00:09:12.241 } 00:09:12.241 ] 00:09:12.241 }' 00:09:12.241 04:00:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 
00:09:12.241 04:00:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:12.501 04:00:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:09:12.501 04:00:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:09:12.501 04:00:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:12.501 04:00:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:12.501 04:00:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:09:12.501 04:00:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:12.501 04:00:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:12.501 04:00:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:09:12.501 04:00:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:12.501 04:00:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:12.501 [2024-12-06 04:00:05.710622] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:12.501 04:00:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:12.501 04:00:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:12.501 "name": "Existed_Raid", 00:09:12.501 "aliases": [ 00:09:12.501 "24aef25d-6b9b-4ee1-9609-5c3c501fb736" 00:09:12.501 ], 00:09:12.501 "product_name": "Raid Volume", 00:09:12.501 "block_size": 512, 00:09:12.501 "num_blocks": 131072, 00:09:12.501 "uuid": "24aef25d-6b9b-4ee1-9609-5c3c501fb736", 00:09:12.501 "assigned_rate_limits": { 00:09:12.501 "rw_ios_per_sec": 0, 00:09:12.501 "rw_mbytes_per_sec": 0, 00:09:12.501 "r_mbytes_per_sec": 
0, 00:09:12.501 "w_mbytes_per_sec": 0 00:09:12.501 }, 00:09:12.501 "claimed": false, 00:09:12.501 "zoned": false, 00:09:12.501 "supported_io_types": { 00:09:12.501 "read": true, 00:09:12.501 "write": true, 00:09:12.501 "unmap": true, 00:09:12.501 "flush": true, 00:09:12.501 "reset": true, 00:09:12.501 "nvme_admin": false, 00:09:12.501 "nvme_io": false, 00:09:12.501 "nvme_io_md": false, 00:09:12.501 "write_zeroes": true, 00:09:12.501 "zcopy": false, 00:09:12.501 "get_zone_info": false, 00:09:12.501 "zone_management": false, 00:09:12.501 "zone_append": false, 00:09:12.501 "compare": false, 00:09:12.501 "compare_and_write": false, 00:09:12.501 "abort": false, 00:09:12.501 "seek_hole": false, 00:09:12.501 "seek_data": false, 00:09:12.501 "copy": false, 00:09:12.501 "nvme_iov_md": false 00:09:12.501 }, 00:09:12.501 "memory_domains": [ 00:09:12.501 { 00:09:12.501 "dma_device_id": "system", 00:09:12.501 "dma_device_type": 1 00:09:12.501 }, 00:09:12.501 { 00:09:12.501 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:12.501 "dma_device_type": 2 00:09:12.501 }, 00:09:12.501 { 00:09:12.501 "dma_device_id": "system", 00:09:12.501 "dma_device_type": 1 00:09:12.501 }, 00:09:12.501 { 00:09:12.501 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:12.501 "dma_device_type": 2 00:09:12.501 } 00:09:12.501 ], 00:09:12.501 "driver_specific": { 00:09:12.501 "raid": { 00:09:12.501 "uuid": "24aef25d-6b9b-4ee1-9609-5c3c501fb736", 00:09:12.501 "strip_size_kb": 64, 00:09:12.501 "state": "online", 00:09:12.501 "raid_level": "concat", 00:09:12.501 "superblock": false, 00:09:12.501 "num_base_bdevs": 2, 00:09:12.501 "num_base_bdevs_discovered": 2, 00:09:12.501 "num_base_bdevs_operational": 2, 00:09:12.501 "base_bdevs_list": [ 00:09:12.501 { 00:09:12.501 "name": "BaseBdev1", 00:09:12.501 "uuid": "cb4d8610-01b1-4239-8197-93fe878512d3", 00:09:12.501 "is_configured": true, 00:09:12.501 "data_offset": 0, 00:09:12.501 "data_size": 65536 00:09:12.501 }, 00:09:12.501 { 00:09:12.501 "name": "BaseBdev2", 
00:09:12.501 "uuid": "fb294993-6388-4499-bcad-b509a4faa459", 00:09:12.501 "is_configured": true, 00:09:12.501 "data_offset": 0, 00:09:12.501 "data_size": 65536 00:09:12.501 } 00:09:12.501 ] 00:09:12.501 } 00:09:12.501 } 00:09:12.501 }' 00:09:12.501 04:00:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:12.501 04:00:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:09:12.501 BaseBdev2' 00:09:12.501 04:00:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:12.501 04:00:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:12.501 04:00:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:12.501 04:00:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:09:12.501 04:00:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:12.501 04:00:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:12.501 04:00:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:12.761 04:00:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:12.761 04:00:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:12.761 04:00:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:12.761 04:00:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:12.761 04:00:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, 
.md_interleave, .dif_type] | join(" ")' 00:09:12.761 04:00:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:09:12.761 04:00:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:12.761 04:00:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:12.761 04:00:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:12.761 04:00:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:12.761 04:00:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:12.761 04:00:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:09:12.761 04:00:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:12.761 04:00:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:12.761 [2024-12-06 04:00:05.926026] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:09:12.761 [2024-12-06 04:00:05.926137] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:12.761 [2024-12-06 04:00:05.926222] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:12.761 04:00:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:12.761 04:00:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:09:12.762 04:00:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy concat 00:09:12.762 04:00:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:09:12.762 04:00:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1 00:09:12.762 04:00:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # 
expected_state=offline 00:09:12.762 04:00:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 1 00:09:12.762 04:00:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:12.762 04:00:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:09:12.762 04:00:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:12.762 04:00:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:12.762 04:00:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:09:12.762 04:00:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:12.762 04:00:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:12.762 04:00:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:12.762 04:00:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:12.762 04:00:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:12.762 04:00:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:12.762 04:00:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:12.762 04:00:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:12.762 04:00:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:12.762 04:00:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:12.762 "name": "Existed_Raid", 00:09:12.762 "uuid": "24aef25d-6b9b-4ee1-9609-5c3c501fb736", 00:09:12.762 "strip_size_kb": 64, 00:09:12.762 
"state": "offline", 00:09:12.762 "raid_level": "concat", 00:09:12.762 "superblock": false, 00:09:12.762 "num_base_bdevs": 2, 00:09:12.762 "num_base_bdevs_discovered": 1, 00:09:12.762 "num_base_bdevs_operational": 1, 00:09:12.762 "base_bdevs_list": [ 00:09:12.762 { 00:09:12.762 "name": null, 00:09:12.762 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:12.762 "is_configured": false, 00:09:12.762 "data_offset": 0, 00:09:12.762 "data_size": 65536 00:09:12.762 }, 00:09:12.762 { 00:09:12.762 "name": "BaseBdev2", 00:09:12.762 "uuid": "fb294993-6388-4499-bcad-b509a4faa459", 00:09:12.762 "is_configured": true, 00:09:12.762 "data_offset": 0, 00:09:12.762 "data_size": 65536 00:09:12.762 } 00:09:12.762 ] 00:09:12.762 }' 00:09:12.762 04:00:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:12.762 04:00:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:13.331 04:00:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:09:13.332 04:00:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:13.332 04:00:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:13.332 04:00:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:13.332 04:00:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:09:13.332 04:00:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:13.332 04:00:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:13.332 04:00:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:09:13.332 04:00:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:09:13.332 04:00:06 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:09:13.332 04:00:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:13.332 04:00:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:13.332 [2024-12-06 04:00:06.498516] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:09:13.332 [2024-12-06 04:00:06.498577] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:09:13.332 04:00:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:13.332 04:00:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:09:13.332 04:00:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:13.332 04:00:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:13.332 04:00:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:09:13.332 04:00:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:13.332 04:00:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:13.332 04:00:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:13.332 04:00:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:09:13.332 04:00:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:09:13.332 04:00:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:09:13.332 04:00:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 61785 00:09:13.332 04:00:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # '[' -z 61785 ']' 00:09:13.332 04:00:06 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@958 -- # kill -0 61785 00:09:13.332 04:00:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # uname 00:09:13.332 04:00:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:13.332 04:00:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61785 00:09:13.332 killing process with pid 61785 00:09:13.332 04:00:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:13.332 04:00:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:13.332 04:00:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 61785' 00:09:13.332 04:00:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@973 -- # kill 61785 00:09:13.332 [2024-12-06 04:00:06.683908] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:13.332 04:00:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@978 -- # wait 61785 00:09:13.592 [2024-12-06 04:00:06.701683] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:14.531 ************************************ 00:09:14.531 END TEST raid_state_function_test 00:09:14.531 ************************************ 00:09:14.531 04:00:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:09:14.531 00:09:14.531 real 0m4.947s 00:09:14.531 user 0m7.149s 00:09:14.531 sys 0m0.764s 00:09:14.531 04:00:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:14.531 04:00:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:14.531 04:00:07 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test concat 2 true 00:09:14.531 04:00:07 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 
']' 00:09:14.531 04:00:07 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:14.531 04:00:07 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:14.791 ************************************ 00:09:14.791 START TEST raid_state_function_test_sb 00:09:14.791 ************************************ 00:09:14.791 04:00:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test concat 2 true 00:09:14.791 04:00:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=concat 00:09:14.791 04:00:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:09:14.791 04:00:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:09:14.791 04:00:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:09:14.791 04:00:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:09:14.791 04:00:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:14.791 04:00:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:09:14.791 04:00:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:14.791 04:00:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:14.791 04:00:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:09:14.791 04:00:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:14.791 04:00:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:14.791 04:00:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:09:14.791 04:00:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 
00:09:14.791 04:00:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:09:14.791 04:00:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:09:14.791 04:00:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:09:14.791 04:00:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:09:14.791 04:00:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']' 00:09:14.791 04:00:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:09:14.791 04:00:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:09:14.791 04:00:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:09:14.791 04:00:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:09:14.791 04:00:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=62038 00:09:14.791 04:00:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:09:14.791 04:00:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 62038' 00:09:14.791 Process raid pid: 62038 00:09:14.791 04:00:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 62038 00:09:14.791 04:00:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 62038 ']' 00:09:14.791 04:00:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:14.791 04:00:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:14.792 04:00:07 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:14.792 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:14.792 04:00:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:14.792 04:00:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:14.792 [2024-12-06 04:00:07.987345] Starting SPDK v25.01-pre git sha1 a4d2a837b / DPDK 24.03.0 initialization... 00:09:14.792 [2024-12-06 04:00:07.987489] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:15.052 [2024-12-06 04:00:08.159398] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:15.052 [2024-12-06 04:00:08.276627] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:15.312 [2024-12-06 04:00:08.479567] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:15.312 [2024-12-06 04:00:08.479701] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:15.572 04:00:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:15.572 04:00:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:09:15.572 04:00:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:09:15.572 04:00:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:15.572 04:00:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:15.572 [2024-12-06 04:00:08.847381] bdev.c:8674:bdev_open_ext: *NOTICE*: 
Currently unable to find bdev with name: BaseBdev1 00:09:15.572 [2024-12-06 04:00:08.847445] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:15.572 [2024-12-06 04:00:08.847456] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:15.572 [2024-12-06 04:00:08.847466] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:15.572 04:00:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:15.572 04:00:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:09:15.572 04:00:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:15.572 04:00:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:15.572 04:00:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:15.572 04:00:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:15.572 04:00:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:15.572 04:00:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:15.572 04:00:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:15.572 04:00:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:15.572 04:00:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:15.572 04:00:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:15.572 04:00:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == 
"Existed_Raid")' 00:09:15.572 04:00:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:15.572 04:00:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:15.572 04:00:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:15.572 04:00:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:15.572 "name": "Existed_Raid", 00:09:15.572 "uuid": "77902fec-3d40-4076-a68f-f2facdc70686", 00:09:15.572 "strip_size_kb": 64, 00:09:15.572 "state": "configuring", 00:09:15.572 "raid_level": "concat", 00:09:15.572 "superblock": true, 00:09:15.572 "num_base_bdevs": 2, 00:09:15.572 "num_base_bdevs_discovered": 0, 00:09:15.572 "num_base_bdevs_operational": 2, 00:09:15.572 "base_bdevs_list": [ 00:09:15.572 { 00:09:15.572 "name": "BaseBdev1", 00:09:15.572 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:15.572 "is_configured": false, 00:09:15.572 "data_offset": 0, 00:09:15.572 "data_size": 0 00:09:15.572 }, 00:09:15.572 { 00:09:15.572 "name": "BaseBdev2", 00:09:15.572 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:15.572 "is_configured": false, 00:09:15.572 "data_offset": 0, 00:09:15.572 "data_size": 0 00:09:15.572 } 00:09:15.572 ] 00:09:15.572 }' 00:09:15.572 04:00:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:15.572 04:00:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:16.151 04:00:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:16.151 04:00:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:16.151 04:00:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:16.151 [2024-12-06 04:00:09.242749] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 
00:09:16.151 [2024-12-06 04:00:09.242856] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:09:16.151 04:00:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:16.152 04:00:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:09:16.152 04:00:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:16.152 04:00:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:16.152 [2024-12-06 04:00:09.250715] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:16.152 [2024-12-06 04:00:09.250805] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:16.152 [2024-12-06 04:00:09.250851] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:16.152 [2024-12-06 04:00:09.250885] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:16.152 04:00:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:16.152 04:00:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:09:16.152 04:00:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:16.152 04:00:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:16.152 [2024-12-06 04:00:09.296642] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:16.152 BaseBdev1 00:09:16.152 04:00:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:16.152 04:00:09 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:09:16.152 04:00:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:09:16.152 04:00:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:16.152 04:00:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:09:16.152 04:00:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:16.152 04:00:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:16.152 04:00:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:16.152 04:00:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:16.152 04:00:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:16.152 04:00:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:16.152 04:00:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:09:16.152 04:00:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:16.152 04:00:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:16.152 [ 00:09:16.152 { 00:09:16.152 "name": "BaseBdev1", 00:09:16.152 "aliases": [ 00:09:16.152 "26003b12-760c-4462-9f37-47581647e223" 00:09:16.152 ], 00:09:16.152 "product_name": "Malloc disk", 00:09:16.152 "block_size": 512, 00:09:16.152 "num_blocks": 65536, 00:09:16.152 "uuid": "26003b12-760c-4462-9f37-47581647e223", 00:09:16.152 "assigned_rate_limits": { 00:09:16.152 "rw_ios_per_sec": 0, 00:09:16.152 "rw_mbytes_per_sec": 0, 00:09:16.152 "r_mbytes_per_sec": 0, 00:09:16.152 "w_mbytes_per_sec": 0 00:09:16.152 }, 00:09:16.152 "claimed": true, 
00:09:16.152 "claim_type": "exclusive_write", 00:09:16.152 "zoned": false, 00:09:16.152 "supported_io_types": { 00:09:16.152 "read": true, 00:09:16.152 "write": true, 00:09:16.152 "unmap": true, 00:09:16.152 "flush": true, 00:09:16.152 "reset": true, 00:09:16.152 "nvme_admin": false, 00:09:16.152 "nvme_io": false, 00:09:16.152 "nvme_io_md": false, 00:09:16.152 "write_zeroes": true, 00:09:16.152 "zcopy": true, 00:09:16.152 "get_zone_info": false, 00:09:16.152 "zone_management": false, 00:09:16.152 "zone_append": false, 00:09:16.152 "compare": false, 00:09:16.152 "compare_and_write": false, 00:09:16.152 "abort": true, 00:09:16.152 "seek_hole": false, 00:09:16.152 "seek_data": false, 00:09:16.152 "copy": true, 00:09:16.152 "nvme_iov_md": false 00:09:16.152 }, 00:09:16.152 "memory_domains": [ 00:09:16.152 { 00:09:16.152 "dma_device_id": "system", 00:09:16.152 "dma_device_type": 1 00:09:16.152 }, 00:09:16.152 { 00:09:16.152 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:16.152 "dma_device_type": 2 00:09:16.152 } 00:09:16.152 ], 00:09:16.152 "driver_specific": {} 00:09:16.152 } 00:09:16.152 ] 00:09:16.152 04:00:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:16.152 04:00:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:09:16.152 04:00:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:09:16.152 04:00:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:16.152 04:00:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:16.152 04:00:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:16.152 04:00:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:16.152 04:00:09 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:16.152 04:00:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:16.152 04:00:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:16.152 04:00:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:16.152 04:00:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:16.152 04:00:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:16.152 04:00:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:16.152 04:00:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:16.152 04:00:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:16.152 04:00:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:16.152 04:00:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:16.152 "name": "Existed_Raid", 00:09:16.152 "uuid": "27d66633-50b0-4de5-b27d-c601b908d512", 00:09:16.152 "strip_size_kb": 64, 00:09:16.152 "state": "configuring", 00:09:16.152 "raid_level": "concat", 00:09:16.152 "superblock": true, 00:09:16.152 "num_base_bdevs": 2, 00:09:16.152 "num_base_bdevs_discovered": 1, 00:09:16.152 "num_base_bdevs_operational": 2, 00:09:16.152 "base_bdevs_list": [ 00:09:16.152 { 00:09:16.152 "name": "BaseBdev1", 00:09:16.152 "uuid": "26003b12-760c-4462-9f37-47581647e223", 00:09:16.152 "is_configured": true, 00:09:16.152 "data_offset": 2048, 00:09:16.152 "data_size": 63488 00:09:16.152 }, 00:09:16.152 { 00:09:16.152 "name": "BaseBdev2", 00:09:16.152 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:16.152 
"is_configured": false, 00:09:16.152 "data_offset": 0, 00:09:16.152 "data_size": 0 00:09:16.152 } 00:09:16.152 ] 00:09:16.152 }' 00:09:16.152 04:00:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:16.152 04:00:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:16.427 04:00:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:16.427 04:00:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:16.427 04:00:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:16.427 [2024-12-06 04:00:09.755948] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:16.428 [2024-12-06 04:00:09.756003] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:09:16.428 04:00:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:16.428 04:00:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:09:16.428 04:00:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:16.428 04:00:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:16.428 [2024-12-06 04:00:09.767987] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:16.428 [2024-12-06 04:00:09.770078] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:16.428 [2024-12-06 04:00:09.770126] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:16.428 04:00:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:16.428 04:00:09 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:09:16.428 04:00:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:16.428 04:00:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:09:16.428 04:00:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:16.428 04:00:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:16.428 04:00:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:16.428 04:00:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:16.428 04:00:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:16.428 04:00:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:16.428 04:00:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:16.428 04:00:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:16.428 04:00:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:16.428 04:00:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:16.428 04:00:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:16.428 04:00:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:16.428 04:00:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:16.688 04:00:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:16.688 04:00:09 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:16.688 "name": "Existed_Raid", 00:09:16.688 "uuid": "e27fa314-c6a1-4400-a4c1-bab43e54a443", 00:09:16.688 "strip_size_kb": 64, 00:09:16.688 "state": "configuring", 00:09:16.688 "raid_level": "concat", 00:09:16.688 "superblock": true, 00:09:16.688 "num_base_bdevs": 2, 00:09:16.688 "num_base_bdevs_discovered": 1, 00:09:16.688 "num_base_bdevs_operational": 2, 00:09:16.688 "base_bdevs_list": [ 00:09:16.688 { 00:09:16.688 "name": "BaseBdev1", 00:09:16.688 "uuid": "26003b12-760c-4462-9f37-47581647e223", 00:09:16.688 "is_configured": true, 00:09:16.688 "data_offset": 2048, 00:09:16.688 "data_size": 63488 00:09:16.688 }, 00:09:16.688 { 00:09:16.688 "name": "BaseBdev2", 00:09:16.688 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:16.688 "is_configured": false, 00:09:16.688 "data_offset": 0, 00:09:16.688 "data_size": 0 00:09:16.688 } 00:09:16.688 ] 00:09:16.688 }' 00:09:16.688 04:00:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:16.688 04:00:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:16.948 04:00:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:09:16.948 04:00:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:16.948 04:00:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:16.948 [2024-12-06 04:00:10.274948] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:16.948 [2024-12-06 04:00:10.275366] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:09:16.948 [2024-12-06 04:00:10.275422] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:09:16.948 [2024-12-06 04:00:10.275703] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: 
raid_bdev_create_cb, 0x60d000005d40 00:09:16.948 [2024-12-06 04:00:10.275903] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:09:16.948 [2024-12-06 04:00:10.275957] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:09:16.948 BaseBdev2 00:09:16.948 [2024-12-06 04:00:10.276171] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:16.948 04:00:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:16.948 04:00:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:09:16.948 04:00:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:09:16.948 04:00:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:16.948 04:00:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:09:16.948 04:00:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:16.948 04:00:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:16.948 04:00:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:16.948 04:00:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:16.948 04:00:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:16.948 04:00:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:16.948 04:00:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:09:16.948 04:00:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:16.948 04:00:10 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:17.209 [ 00:09:17.209 { 00:09:17.209 "name": "BaseBdev2", 00:09:17.209 "aliases": [ 00:09:17.209 "55457cd5-8589-4e85-86f6-9a33d651456e" 00:09:17.209 ], 00:09:17.209 "product_name": "Malloc disk", 00:09:17.209 "block_size": 512, 00:09:17.209 "num_blocks": 65536, 00:09:17.209 "uuid": "55457cd5-8589-4e85-86f6-9a33d651456e", 00:09:17.209 "assigned_rate_limits": { 00:09:17.209 "rw_ios_per_sec": 0, 00:09:17.209 "rw_mbytes_per_sec": 0, 00:09:17.209 "r_mbytes_per_sec": 0, 00:09:17.209 "w_mbytes_per_sec": 0 00:09:17.209 }, 00:09:17.209 "claimed": true, 00:09:17.209 "claim_type": "exclusive_write", 00:09:17.209 "zoned": false, 00:09:17.209 "supported_io_types": { 00:09:17.209 "read": true, 00:09:17.209 "write": true, 00:09:17.209 "unmap": true, 00:09:17.209 "flush": true, 00:09:17.209 "reset": true, 00:09:17.209 "nvme_admin": false, 00:09:17.209 "nvme_io": false, 00:09:17.209 "nvme_io_md": false, 00:09:17.209 "write_zeroes": true, 00:09:17.209 "zcopy": true, 00:09:17.209 "get_zone_info": false, 00:09:17.209 "zone_management": false, 00:09:17.209 "zone_append": false, 00:09:17.209 "compare": false, 00:09:17.209 "compare_and_write": false, 00:09:17.209 "abort": true, 00:09:17.209 "seek_hole": false, 00:09:17.209 "seek_data": false, 00:09:17.209 "copy": true, 00:09:17.209 "nvme_iov_md": false 00:09:17.209 }, 00:09:17.209 "memory_domains": [ 00:09:17.209 { 00:09:17.209 "dma_device_id": "system", 00:09:17.209 "dma_device_type": 1 00:09:17.209 }, 00:09:17.209 { 00:09:17.209 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:17.209 "dma_device_type": 2 00:09:17.209 } 00:09:17.209 ], 00:09:17.209 "driver_specific": {} 00:09:17.209 } 00:09:17.209 ] 00:09:17.209 04:00:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:17.209 04:00:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:09:17.209 04:00:10 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:09:17.209 04:00:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:17.209 04:00:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 64 2 00:09:17.209 04:00:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:17.209 04:00:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:17.209 04:00:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:17.209 04:00:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:17.209 04:00:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:17.209 04:00:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:17.209 04:00:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:17.209 04:00:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:17.209 04:00:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:17.209 04:00:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:17.209 04:00:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:17.209 04:00:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:17.209 04:00:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:17.209 04:00:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:17.209 04:00:10 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:17.209 "name": "Existed_Raid", 00:09:17.209 "uuid": "e27fa314-c6a1-4400-a4c1-bab43e54a443", 00:09:17.209 "strip_size_kb": 64, 00:09:17.209 "state": "online", 00:09:17.209 "raid_level": "concat", 00:09:17.209 "superblock": true, 00:09:17.209 "num_base_bdevs": 2, 00:09:17.209 "num_base_bdevs_discovered": 2, 00:09:17.209 "num_base_bdevs_operational": 2, 00:09:17.209 "base_bdevs_list": [ 00:09:17.209 { 00:09:17.209 "name": "BaseBdev1", 00:09:17.209 "uuid": "26003b12-760c-4462-9f37-47581647e223", 00:09:17.209 "is_configured": true, 00:09:17.209 "data_offset": 2048, 00:09:17.209 "data_size": 63488 00:09:17.209 }, 00:09:17.209 { 00:09:17.209 "name": "BaseBdev2", 00:09:17.209 "uuid": "55457cd5-8589-4e85-86f6-9a33d651456e", 00:09:17.209 "is_configured": true, 00:09:17.209 "data_offset": 2048, 00:09:17.209 "data_size": 63488 00:09:17.209 } 00:09:17.209 ] 00:09:17.209 }' 00:09:17.209 04:00:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:17.209 04:00:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:17.470 04:00:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:09:17.470 04:00:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:09:17.470 04:00:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:17.470 04:00:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:17.470 04:00:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:09:17.470 04:00:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:17.470 04:00:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b 
Existed_Raid 00:09:17.470 04:00:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:17.470 04:00:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:17.470 04:00:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:17.470 [2024-12-06 04:00:10.766515] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:17.470 04:00:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:17.470 04:00:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:17.470 "name": "Existed_Raid", 00:09:17.470 "aliases": [ 00:09:17.470 "e27fa314-c6a1-4400-a4c1-bab43e54a443" 00:09:17.470 ], 00:09:17.470 "product_name": "Raid Volume", 00:09:17.470 "block_size": 512, 00:09:17.470 "num_blocks": 126976, 00:09:17.470 "uuid": "e27fa314-c6a1-4400-a4c1-bab43e54a443", 00:09:17.470 "assigned_rate_limits": { 00:09:17.470 "rw_ios_per_sec": 0, 00:09:17.470 "rw_mbytes_per_sec": 0, 00:09:17.470 "r_mbytes_per_sec": 0, 00:09:17.470 "w_mbytes_per_sec": 0 00:09:17.470 }, 00:09:17.470 "claimed": false, 00:09:17.470 "zoned": false, 00:09:17.470 "supported_io_types": { 00:09:17.470 "read": true, 00:09:17.470 "write": true, 00:09:17.470 "unmap": true, 00:09:17.470 "flush": true, 00:09:17.470 "reset": true, 00:09:17.470 "nvme_admin": false, 00:09:17.470 "nvme_io": false, 00:09:17.470 "nvme_io_md": false, 00:09:17.470 "write_zeroes": true, 00:09:17.470 "zcopy": false, 00:09:17.470 "get_zone_info": false, 00:09:17.470 "zone_management": false, 00:09:17.470 "zone_append": false, 00:09:17.470 "compare": false, 00:09:17.470 "compare_and_write": false, 00:09:17.470 "abort": false, 00:09:17.470 "seek_hole": false, 00:09:17.470 "seek_data": false, 00:09:17.470 "copy": false, 00:09:17.470 "nvme_iov_md": false 00:09:17.471 }, 00:09:17.471 "memory_domains": [ 00:09:17.471 { 00:09:17.471 
"dma_device_id": "system", 00:09:17.471 "dma_device_type": 1 00:09:17.471 }, 00:09:17.471 { 00:09:17.471 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:17.471 "dma_device_type": 2 00:09:17.471 }, 00:09:17.471 { 00:09:17.471 "dma_device_id": "system", 00:09:17.471 "dma_device_type": 1 00:09:17.471 }, 00:09:17.471 { 00:09:17.471 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:17.471 "dma_device_type": 2 00:09:17.471 } 00:09:17.471 ], 00:09:17.471 "driver_specific": { 00:09:17.471 "raid": { 00:09:17.471 "uuid": "e27fa314-c6a1-4400-a4c1-bab43e54a443", 00:09:17.471 "strip_size_kb": 64, 00:09:17.471 "state": "online", 00:09:17.471 "raid_level": "concat", 00:09:17.471 "superblock": true, 00:09:17.471 "num_base_bdevs": 2, 00:09:17.471 "num_base_bdevs_discovered": 2, 00:09:17.471 "num_base_bdevs_operational": 2, 00:09:17.471 "base_bdevs_list": [ 00:09:17.471 { 00:09:17.471 "name": "BaseBdev1", 00:09:17.471 "uuid": "26003b12-760c-4462-9f37-47581647e223", 00:09:17.471 "is_configured": true, 00:09:17.471 "data_offset": 2048, 00:09:17.471 "data_size": 63488 00:09:17.471 }, 00:09:17.471 { 00:09:17.471 "name": "BaseBdev2", 00:09:17.471 "uuid": "55457cd5-8589-4e85-86f6-9a33d651456e", 00:09:17.471 "is_configured": true, 00:09:17.471 "data_offset": 2048, 00:09:17.471 "data_size": 63488 00:09:17.471 } 00:09:17.471 ] 00:09:17.471 } 00:09:17.471 } 00:09:17.471 }' 00:09:17.471 04:00:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:17.732 04:00:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:09:17.732 BaseBdev2' 00:09:17.732 04:00:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:17.732 04:00:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:17.732 04:00:10 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:17.732 04:00:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:09:17.732 04:00:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:17.732 04:00:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:17.732 04:00:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:17.732 04:00:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:17.732 04:00:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:17.732 04:00:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:17.732 04:00:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:17.732 04:00:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:09:17.732 04:00:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:17.732 04:00:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:17.732 04:00:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:17.732 04:00:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:17.732 04:00:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:17.732 04:00:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:17.732 04:00:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # 
rpc_cmd bdev_malloc_delete BaseBdev1 00:09:17.732 04:00:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:17.732 04:00:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:17.732 [2024-12-06 04:00:10.997814] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:09:17.732 [2024-12-06 04:00:10.997849] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:17.732 [2024-12-06 04:00:10.997901] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:17.992 04:00:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:17.992 04:00:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:09:17.992 04:00:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy concat 00:09:17.992 04:00:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:09:17.992 04:00:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1 00:09:17.992 04:00:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:09:17.992 04:00:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 1 00:09:17.992 04:00:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:17.992 04:00:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:09:17.992 04:00:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:17.992 04:00:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:17.992 04:00:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 
00:09:17.992 04:00:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:17.992 04:00:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:17.992 04:00:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:17.992 04:00:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:17.992 04:00:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:17.992 04:00:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:17.992 04:00:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:17.992 04:00:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:17.992 04:00:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:17.992 04:00:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:17.992 "name": "Existed_Raid", 00:09:17.992 "uuid": "e27fa314-c6a1-4400-a4c1-bab43e54a443", 00:09:17.992 "strip_size_kb": 64, 00:09:17.992 "state": "offline", 00:09:17.992 "raid_level": "concat", 00:09:17.992 "superblock": true, 00:09:17.992 "num_base_bdevs": 2, 00:09:17.992 "num_base_bdevs_discovered": 1, 00:09:17.992 "num_base_bdevs_operational": 1, 00:09:17.992 "base_bdevs_list": [ 00:09:17.992 { 00:09:17.992 "name": null, 00:09:17.992 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:17.993 "is_configured": false, 00:09:17.993 "data_offset": 0, 00:09:17.993 "data_size": 63488 00:09:17.993 }, 00:09:17.993 { 00:09:17.993 "name": "BaseBdev2", 00:09:17.993 "uuid": "55457cd5-8589-4e85-86f6-9a33d651456e", 00:09:17.993 "is_configured": true, 00:09:17.993 "data_offset": 2048, 00:09:17.993 "data_size": 63488 00:09:17.993 } 00:09:17.993 ] 
00:09:17.993 }' 00:09:17.993 04:00:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:17.993 04:00:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:18.253 04:00:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:09:18.253 04:00:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:18.253 04:00:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:09:18.253 04:00:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:18.253 04:00:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:18.253 04:00:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:18.253 04:00:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:18.253 04:00:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:09:18.253 04:00:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:09:18.253 04:00:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:09:18.253 04:00:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:18.253 04:00:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:18.253 [2024-12-06 04:00:11.572748] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:09:18.253 [2024-12-06 04:00:11.572808] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:09:18.512 04:00:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:18.512 04:00:11 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:09:18.512 04:00:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:18.512 04:00:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:18.512 04:00:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:09:18.512 04:00:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:18.512 04:00:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:18.512 04:00:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:18.512 04:00:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:09:18.512 04:00:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:09:18.512 04:00:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:09:18.512 04:00:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 62038 00:09:18.512 04:00:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 62038 ']' 00:09:18.512 04:00:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 62038 00:09:18.512 04:00:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:09:18.512 04:00:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:18.512 04:00:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 62038 00:09:18.512 killing process with pid 62038 00:09:18.512 04:00:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:18.513 04:00:11 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:18.513 04:00:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 62038' 00:09:18.513 04:00:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 62038 00:09:18.513 [2024-12-06 04:00:11.759791] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:18.513 04:00:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 62038 00:09:18.513 [2024-12-06 04:00:11.776975] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:19.894 04:00:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:09:19.894 00:09:19.894 real 0m5.013s 00:09:19.894 user 0m7.247s 00:09:19.894 sys 0m0.795s 00:09:19.894 04:00:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:19.894 04:00:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:19.894 ************************************ 00:09:19.894 END TEST raid_state_function_test_sb 00:09:19.894 ************************************ 00:09:19.894 04:00:12 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test concat 2 00:09:19.894 04:00:12 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:09:19.894 04:00:12 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:19.894 04:00:12 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:19.894 ************************************ 00:09:19.894 START TEST raid_superblock_test 00:09:19.894 ************************************ 00:09:19.894 04:00:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test concat 2 00:09:19.894 04:00:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=concat 00:09:19.894 04:00:12 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2 00:09:19.894 04:00:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:09:19.894 04:00:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:09:19.894 04:00:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:09:19.894 04:00:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:09:19.894 04:00:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:09:19.894 04:00:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:09:19.894 04:00:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:09:19.894 04:00:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:09:19.894 04:00:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:09:19.894 04:00:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:09:19.894 04:00:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:09:19.894 04:00:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' concat '!=' raid1 ']' 00:09:19.894 04:00:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:09:19.894 04:00:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:09:19.894 04:00:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=62285 00:09:19.894 04:00:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:09:19.894 04:00:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 62285 00:09:19.894 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:09:19.894 04:00:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 62285 ']' 00:09:19.894 04:00:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:19.894 04:00:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:19.894 04:00:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:19.894 04:00:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:19.894 04:00:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:19.894 [2024-12-06 04:00:13.067245] Starting SPDK v25.01-pre git sha1 a4d2a837b / DPDK 24.03.0 initialization... 00:09:19.894 [2024-12-06 04:00:13.067399] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62285 ] 00:09:19.894 [2024-12-06 04:00:13.240746] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:20.154 [2024-12-06 04:00:13.347561] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:20.414 [2024-12-06 04:00:13.543790] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:20.414 [2024-12-06 04:00:13.543928] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:20.674 04:00:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:20.674 04:00:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:09:20.674 04:00:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:09:20.674 04:00:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= 
num_base_bdevs )) 00:09:20.674 04:00:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:09:20.674 04:00:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:09:20.674 04:00:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:09:20.674 04:00:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:09:20.674 04:00:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:09:20.674 04:00:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:09:20.674 04:00:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:09:20.674 04:00:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:20.674 04:00:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:20.674 malloc1 00:09:20.674 04:00:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:20.674 04:00:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:09:20.674 04:00:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:20.674 04:00:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:20.674 [2024-12-06 04:00:13.941125] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:09:20.674 [2024-12-06 04:00:13.941230] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:20.674 [2024-12-06 04:00:13.941272] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:09:20.674 [2024-12-06 04:00:13.941302] vbdev_passthru.c: 697:vbdev_passthru_register: 
*NOTICE*: bdev claimed 00:09:20.674 [2024-12-06 04:00:13.943454] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:20.674 [2024-12-06 04:00:13.943528] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:09:20.674 pt1 00:09:20.674 04:00:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:20.674 04:00:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:09:20.674 04:00:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:09:20.674 04:00:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:09:20.674 04:00:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:09:20.674 04:00:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:09:20.674 04:00:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:09:20.674 04:00:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:09:20.674 04:00:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:09:20.674 04:00:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:09:20.674 04:00:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:20.674 04:00:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:20.674 malloc2 00:09:20.674 04:00:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:20.674 04:00:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:09:20.674 04:00:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:09:20.674 04:00:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:20.674 [2024-12-06 04:00:13.995764] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:09:20.674 [2024-12-06 04:00:13.995869] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:20.674 [2024-12-06 04:00:13.995913] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:09:20.674 [2024-12-06 04:00:13.995941] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:20.674 [2024-12-06 04:00:13.998094] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:20.674 [2024-12-06 04:00:13.998161] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:09:20.674 pt2 00:09:20.674 04:00:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:20.674 04:00:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:09:20.674 04:00:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:09:20.674 04:00:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''pt1 pt2'\''' -n raid_bdev1 -s 00:09:20.674 04:00:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:20.674 04:00:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:20.674 [2024-12-06 04:00:14.007801] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:09:20.674 [2024-12-06 04:00:14.009620] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:09:20.674 [2024-12-06 04:00:14.009814] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:09:20.674 [2024-12-06 04:00:14.009860] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 
126976, blocklen 512 00:09:20.674 [2024-12-06 04:00:14.010149] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:09:20.674 [2024-12-06 04:00:14.010330] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:09:20.674 [2024-12-06 04:00:14.010374] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:09:20.674 [2024-12-06 04:00:14.010572] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:20.674 04:00:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:20.674 04:00:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:09:20.674 04:00:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:20.674 04:00:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:20.674 04:00:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:20.674 04:00:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:20.674 04:00:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:20.674 04:00:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:20.674 04:00:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:20.674 04:00:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:20.674 04:00:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:20.674 04:00:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:20.674 04:00:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:20.674 04:00:14 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:20.674 04:00:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:20.934 04:00:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:20.934 04:00:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:20.934 "name": "raid_bdev1", 00:09:20.934 "uuid": "e10c2217-db23-464c-af64-433be876c5ab", 00:09:20.934 "strip_size_kb": 64, 00:09:20.934 "state": "online", 00:09:20.934 "raid_level": "concat", 00:09:20.934 "superblock": true, 00:09:20.934 "num_base_bdevs": 2, 00:09:20.934 "num_base_bdevs_discovered": 2, 00:09:20.934 "num_base_bdevs_operational": 2, 00:09:20.934 "base_bdevs_list": [ 00:09:20.934 { 00:09:20.934 "name": "pt1", 00:09:20.934 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:20.934 "is_configured": true, 00:09:20.934 "data_offset": 2048, 00:09:20.934 "data_size": 63488 00:09:20.934 }, 00:09:20.934 { 00:09:20.934 "name": "pt2", 00:09:20.934 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:20.934 "is_configured": true, 00:09:20.934 "data_offset": 2048, 00:09:20.934 "data_size": 63488 00:09:20.934 } 00:09:20.934 ] 00:09:20.934 }' 00:09:20.934 04:00:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:20.934 04:00:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:21.193 04:00:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:09:21.193 04:00:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:09:21.193 04:00:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:21.193 04:00:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:21.193 04:00:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # 
local name 00:09:21.193 04:00:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:21.193 04:00:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:09:21.193 04:00:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:21.193 04:00:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:21.193 04:00:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:21.193 [2024-12-06 04:00:14.463371] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:21.193 04:00:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:21.193 04:00:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:21.193 "name": "raid_bdev1", 00:09:21.193 "aliases": [ 00:09:21.193 "e10c2217-db23-464c-af64-433be876c5ab" 00:09:21.193 ], 00:09:21.193 "product_name": "Raid Volume", 00:09:21.193 "block_size": 512, 00:09:21.193 "num_blocks": 126976, 00:09:21.193 "uuid": "e10c2217-db23-464c-af64-433be876c5ab", 00:09:21.193 "assigned_rate_limits": { 00:09:21.193 "rw_ios_per_sec": 0, 00:09:21.193 "rw_mbytes_per_sec": 0, 00:09:21.193 "r_mbytes_per_sec": 0, 00:09:21.193 "w_mbytes_per_sec": 0 00:09:21.193 }, 00:09:21.193 "claimed": false, 00:09:21.193 "zoned": false, 00:09:21.193 "supported_io_types": { 00:09:21.193 "read": true, 00:09:21.193 "write": true, 00:09:21.193 "unmap": true, 00:09:21.193 "flush": true, 00:09:21.193 "reset": true, 00:09:21.193 "nvme_admin": false, 00:09:21.193 "nvme_io": false, 00:09:21.193 "nvme_io_md": false, 00:09:21.193 "write_zeroes": true, 00:09:21.193 "zcopy": false, 00:09:21.193 "get_zone_info": false, 00:09:21.193 "zone_management": false, 00:09:21.193 "zone_append": false, 00:09:21.193 "compare": false, 00:09:21.193 "compare_and_write": false, 00:09:21.193 "abort": false, 00:09:21.193 
"seek_hole": false, 00:09:21.193 "seek_data": false, 00:09:21.193 "copy": false, 00:09:21.193 "nvme_iov_md": false 00:09:21.193 }, 00:09:21.193 "memory_domains": [ 00:09:21.193 { 00:09:21.193 "dma_device_id": "system", 00:09:21.193 "dma_device_type": 1 00:09:21.193 }, 00:09:21.193 { 00:09:21.193 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:21.193 "dma_device_type": 2 00:09:21.193 }, 00:09:21.193 { 00:09:21.194 "dma_device_id": "system", 00:09:21.194 "dma_device_type": 1 00:09:21.194 }, 00:09:21.194 { 00:09:21.194 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:21.194 "dma_device_type": 2 00:09:21.194 } 00:09:21.194 ], 00:09:21.194 "driver_specific": { 00:09:21.194 "raid": { 00:09:21.194 "uuid": "e10c2217-db23-464c-af64-433be876c5ab", 00:09:21.194 "strip_size_kb": 64, 00:09:21.194 "state": "online", 00:09:21.194 "raid_level": "concat", 00:09:21.194 "superblock": true, 00:09:21.194 "num_base_bdevs": 2, 00:09:21.194 "num_base_bdevs_discovered": 2, 00:09:21.194 "num_base_bdevs_operational": 2, 00:09:21.194 "base_bdevs_list": [ 00:09:21.194 { 00:09:21.194 "name": "pt1", 00:09:21.194 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:21.194 "is_configured": true, 00:09:21.194 "data_offset": 2048, 00:09:21.194 "data_size": 63488 00:09:21.194 }, 00:09:21.194 { 00:09:21.194 "name": "pt2", 00:09:21.194 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:21.194 "is_configured": true, 00:09:21.194 "data_offset": 2048, 00:09:21.194 "data_size": 63488 00:09:21.194 } 00:09:21.194 ] 00:09:21.194 } 00:09:21.194 } 00:09:21.194 }' 00:09:21.194 04:00:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:21.453 04:00:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:09:21.453 pt2' 00:09:21.453 04:00:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 
00:09:21.453 04:00:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:21.453 04:00:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:21.453 04:00:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:09:21.453 04:00:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:21.453 04:00:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:21.453 04:00:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:21.453 04:00:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:21.453 04:00:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:21.453 04:00:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:21.453 04:00:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:21.453 04:00:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:21.453 04:00:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:09:21.453 04:00:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:21.453 04:00:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:21.453 04:00:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:21.453 04:00:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:21.453 04:00:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:21.453 04:00:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # 
rpc_cmd bdev_get_bdevs -b raid_bdev1 00:09:21.453 04:00:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:21.453 04:00:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:21.453 04:00:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:09:21.453 [2024-12-06 04:00:14.674979] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:21.453 04:00:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:21.453 04:00:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=e10c2217-db23-464c-af64-433be876c5ab 00:09:21.453 04:00:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z e10c2217-db23-464c-af64-433be876c5ab ']' 00:09:21.453 04:00:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:09:21.453 04:00:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:21.453 04:00:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:21.453 [2024-12-06 04:00:14.718607] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:21.453 [2024-12-06 04:00:14.718636] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:21.453 [2024-12-06 04:00:14.718730] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:21.453 [2024-12-06 04:00:14.718782] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:21.453 [2024-12-06 04:00:14.718795] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:09:21.453 04:00:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:21.453 04:00:14 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:21.453 04:00:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:21.453 04:00:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:21.453 04:00:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:09:21.453 04:00:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:21.453 04:00:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:09:21.453 04:00:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:09:21.453 04:00:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:09:21.453 04:00:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:09:21.453 04:00:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:21.453 04:00:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:21.453 04:00:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:21.453 04:00:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:09:21.453 04:00:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:09:21.453 04:00:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:21.453 04:00:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:21.453 04:00:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:21.453 04:00:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:09:21.453 04:00:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:09:21.453 04:00:14 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:21.453 04:00:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:21.712 04:00:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:21.712 04:00:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:09:21.712 04:00:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:09:21.712 04:00:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:09:21.712 04:00:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:09:21.712 04:00:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:09:21.712 04:00:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:21.712 04:00:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:09:21.712 04:00:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:21.712 04:00:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:09:21.712 04:00:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:21.712 04:00:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:21.712 [2024-12-06 04:00:14.854411] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:09:21.712 [2024-12-06 04:00:14.856344] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:09:21.712 [2024-12-06 04:00:14.856457] 
bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:09:21.712 [2024-12-06 04:00:14.856559] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:09:21.712 [2024-12-06 04:00:14.856611] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:21.712 [2024-12-06 04:00:14.856642] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:09:21.712 request: 00:09:21.712 { 00:09:21.712 "name": "raid_bdev1", 00:09:21.712 "raid_level": "concat", 00:09:21.712 "base_bdevs": [ 00:09:21.712 "malloc1", 00:09:21.712 "malloc2" 00:09:21.712 ], 00:09:21.712 "strip_size_kb": 64, 00:09:21.712 "superblock": false, 00:09:21.712 "method": "bdev_raid_create", 00:09:21.712 "req_id": 1 00:09:21.712 } 00:09:21.712 Got JSON-RPC error response 00:09:21.712 response: 00:09:21.712 { 00:09:21.712 "code": -17, 00:09:21.712 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:09:21.712 } 00:09:21.712 04:00:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:09:21.712 04:00:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:09:21.712 04:00:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:09:21.712 04:00:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:09:21.712 04:00:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:09:21.713 04:00:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:21.713 04:00:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:09:21.713 04:00:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:21.713 04:00:14 bdev_raid.raid_superblock_test 
-- common/autotest_common.sh@10 -- # set +x 00:09:21.713 04:00:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:21.713 04:00:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:09:21.713 04:00:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:09:21.713 04:00:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:09:21.713 04:00:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:21.713 04:00:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:21.713 [2024-12-06 04:00:14.922255] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:09:21.713 [2024-12-06 04:00:14.922353] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:21.713 [2024-12-06 04:00:14.922387] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:09:21.713 [2024-12-06 04:00:14.922417] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:21.713 [2024-12-06 04:00:14.924566] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:21.713 [2024-12-06 04:00:14.924644] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:09:21.713 [2024-12-06 04:00:14.924738] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:09:21.713 [2024-12-06 04:00:14.924831] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:09:21.713 pt1 00:09:21.713 04:00:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:21.713 04:00:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 2 00:09:21.713 04:00:14 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:21.713 04:00:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:21.713 04:00:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:21.713 04:00:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:21.713 04:00:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:21.713 04:00:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:21.713 04:00:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:21.713 04:00:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:21.713 04:00:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:21.713 04:00:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:21.713 04:00:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:21.713 04:00:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:21.713 04:00:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:21.713 04:00:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:21.713 04:00:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:21.713 "name": "raid_bdev1", 00:09:21.713 "uuid": "e10c2217-db23-464c-af64-433be876c5ab", 00:09:21.713 "strip_size_kb": 64, 00:09:21.713 "state": "configuring", 00:09:21.713 "raid_level": "concat", 00:09:21.713 "superblock": true, 00:09:21.713 "num_base_bdevs": 2, 00:09:21.713 "num_base_bdevs_discovered": 1, 00:09:21.713 "num_base_bdevs_operational": 2, 00:09:21.713 "base_bdevs_list": [ 00:09:21.713 { 00:09:21.713 
"name": "pt1", 00:09:21.713 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:21.713 "is_configured": true, 00:09:21.713 "data_offset": 2048, 00:09:21.713 "data_size": 63488 00:09:21.713 }, 00:09:21.713 { 00:09:21.713 "name": null, 00:09:21.713 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:21.713 "is_configured": false, 00:09:21.713 "data_offset": 2048, 00:09:21.713 "data_size": 63488 00:09:21.713 } 00:09:21.713 ] 00:09:21.713 }' 00:09:21.713 04:00:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:21.713 04:00:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:22.278 04:00:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']' 00:09:22.278 04:00:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:09:22.278 04:00:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:09:22.278 04:00:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:09:22.278 04:00:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:22.278 04:00:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:22.278 [2024-12-06 04:00:15.397477] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:09:22.278 [2024-12-06 04:00:15.397655] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:22.278 [2024-12-06 04:00:15.397683] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:09:22.278 [2024-12-06 04:00:15.397695] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:22.278 [2024-12-06 04:00:15.398174] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:22.278 [2024-12-06 04:00:15.398198] vbdev_passthru.c: 
711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:09:22.278 [2024-12-06 04:00:15.398281] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:09:22.278 [2024-12-06 04:00:15.398309] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:09:22.278 [2024-12-06 04:00:15.398420] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:09:22.279 [2024-12-06 04:00:15.398432] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:09:22.279 [2024-12-06 04:00:15.398670] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:09:22.279 [2024-12-06 04:00:15.398814] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:09:22.279 [2024-12-06 04:00:15.398823] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:09:22.279 [2024-12-06 04:00:15.398973] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:22.279 pt2 00:09:22.279 04:00:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:22.279 04:00:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:09:22.279 04:00:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:09:22.279 04:00:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:09:22.279 04:00:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:22.279 04:00:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:22.279 04:00:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:22.279 04:00:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:22.279 
04:00:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:22.279 04:00:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:22.279 04:00:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:22.279 04:00:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:22.279 04:00:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:22.279 04:00:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:22.279 04:00:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:22.279 04:00:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:22.279 04:00:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:22.279 04:00:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:22.279 04:00:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:22.279 "name": "raid_bdev1", 00:09:22.279 "uuid": "e10c2217-db23-464c-af64-433be876c5ab", 00:09:22.279 "strip_size_kb": 64, 00:09:22.279 "state": "online", 00:09:22.279 "raid_level": "concat", 00:09:22.279 "superblock": true, 00:09:22.279 "num_base_bdevs": 2, 00:09:22.279 "num_base_bdevs_discovered": 2, 00:09:22.279 "num_base_bdevs_operational": 2, 00:09:22.279 "base_bdevs_list": [ 00:09:22.279 { 00:09:22.279 "name": "pt1", 00:09:22.279 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:22.279 "is_configured": true, 00:09:22.279 "data_offset": 2048, 00:09:22.279 "data_size": 63488 00:09:22.279 }, 00:09:22.279 { 00:09:22.279 "name": "pt2", 00:09:22.279 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:22.279 "is_configured": true, 00:09:22.279 "data_offset": 2048, 00:09:22.279 "data_size": 63488 
00:09:22.279 } 00:09:22.279 ] 00:09:22.279 }' 00:09:22.279 04:00:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:22.279 04:00:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:22.537 04:00:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:09:22.537 04:00:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:09:22.537 04:00:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:22.537 04:00:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:22.537 04:00:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:09:22.537 04:00:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:22.537 04:00:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:22.538 04:00:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:09:22.538 04:00:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:22.538 04:00:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:22.538 [2024-12-06 04:00:15.856931] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:22.538 04:00:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:22.538 04:00:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:22.538 "name": "raid_bdev1", 00:09:22.538 "aliases": [ 00:09:22.538 "e10c2217-db23-464c-af64-433be876c5ab" 00:09:22.538 ], 00:09:22.538 "product_name": "Raid Volume", 00:09:22.538 "block_size": 512, 00:09:22.538 "num_blocks": 126976, 00:09:22.538 "uuid": "e10c2217-db23-464c-af64-433be876c5ab", 00:09:22.538 "assigned_rate_limits": { 00:09:22.538 
"rw_ios_per_sec": 0, 00:09:22.538 "rw_mbytes_per_sec": 0, 00:09:22.538 "r_mbytes_per_sec": 0, 00:09:22.538 "w_mbytes_per_sec": 0 00:09:22.538 }, 00:09:22.538 "claimed": false, 00:09:22.538 "zoned": false, 00:09:22.538 "supported_io_types": { 00:09:22.538 "read": true, 00:09:22.538 "write": true, 00:09:22.538 "unmap": true, 00:09:22.538 "flush": true, 00:09:22.538 "reset": true, 00:09:22.538 "nvme_admin": false, 00:09:22.538 "nvme_io": false, 00:09:22.538 "nvme_io_md": false, 00:09:22.538 "write_zeroes": true, 00:09:22.538 "zcopy": false, 00:09:22.538 "get_zone_info": false, 00:09:22.538 "zone_management": false, 00:09:22.538 "zone_append": false, 00:09:22.538 "compare": false, 00:09:22.538 "compare_and_write": false, 00:09:22.538 "abort": false, 00:09:22.538 "seek_hole": false, 00:09:22.538 "seek_data": false, 00:09:22.538 "copy": false, 00:09:22.538 "nvme_iov_md": false 00:09:22.538 }, 00:09:22.538 "memory_domains": [ 00:09:22.538 { 00:09:22.538 "dma_device_id": "system", 00:09:22.538 "dma_device_type": 1 00:09:22.538 }, 00:09:22.538 { 00:09:22.538 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:22.538 "dma_device_type": 2 00:09:22.538 }, 00:09:22.538 { 00:09:22.538 "dma_device_id": "system", 00:09:22.538 "dma_device_type": 1 00:09:22.538 }, 00:09:22.538 { 00:09:22.538 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:22.538 "dma_device_type": 2 00:09:22.538 } 00:09:22.538 ], 00:09:22.538 "driver_specific": { 00:09:22.538 "raid": { 00:09:22.538 "uuid": "e10c2217-db23-464c-af64-433be876c5ab", 00:09:22.538 "strip_size_kb": 64, 00:09:22.538 "state": "online", 00:09:22.538 "raid_level": "concat", 00:09:22.538 "superblock": true, 00:09:22.538 "num_base_bdevs": 2, 00:09:22.538 "num_base_bdevs_discovered": 2, 00:09:22.538 "num_base_bdevs_operational": 2, 00:09:22.538 "base_bdevs_list": [ 00:09:22.538 { 00:09:22.538 "name": "pt1", 00:09:22.538 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:22.538 "is_configured": true, 00:09:22.538 "data_offset": 2048, 00:09:22.538 
"data_size": 63488 00:09:22.538 }, 00:09:22.538 { 00:09:22.538 "name": "pt2", 00:09:22.538 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:22.538 "is_configured": true, 00:09:22.538 "data_offset": 2048, 00:09:22.538 "data_size": 63488 00:09:22.538 } 00:09:22.538 ] 00:09:22.538 } 00:09:22.538 } 00:09:22.538 }' 00:09:22.796 04:00:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:22.796 04:00:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:09:22.796 pt2' 00:09:22.796 04:00:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:22.796 04:00:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:22.796 04:00:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:22.796 04:00:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:09:22.796 04:00:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:22.796 04:00:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:22.796 04:00:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:22.796 04:00:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:22.796 04:00:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:22.796 04:00:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:22.796 04:00:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:22.796 04:00:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 
00:09:22.796 04:00:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:22.796 04:00:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:22.796 04:00:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:22.796 04:00:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:22.796 04:00:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:22.796 04:00:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:22.796 04:00:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:09:22.796 04:00:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:22.796 04:00:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:22.796 04:00:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:09:22.796 [2024-12-06 04:00:16.080566] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:22.796 04:00:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:22.796 04:00:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' e10c2217-db23-464c-af64-433be876c5ab '!=' e10c2217-db23-464c-af64-433be876c5ab ']' 00:09:22.796 04:00:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy concat 00:09:22.796 04:00:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:09:22.796 04:00:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1 00:09:22.796 04:00:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 62285 00:09:22.796 04:00:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 62285 ']' 
00:09:22.796 04:00:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # kill -0 62285 00:09:22.796 04:00:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # uname 00:09:22.796 04:00:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:22.796 04:00:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 62285 00:09:22.796 killing process with pid 62285 00:09:22.796 04:00:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:22.796 04:00:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:22.796 04:00:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 62285' 00:09:22.796 04:00:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@973 -- # kill 62285 00:09:22.796 [2024-12-06 04:00:16.146265] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:22.796 [2024-12-06 04:00:16.146380] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:22.796 [2024-12-06 04:00:16.146434] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:22.796 04:00:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@978 -- # wait 62285 00:09:22.796 [2024-12-06 04:00:16.146447] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:09:23.055 [2024-12-06 04:00:16.366361] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:24.476 ************************************ 00:09:24.476 END TEST raid_superblock_test 00:09:24.476 04:00:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:09:24.476 00:09:24.476 real 0m4.544s 00:09:24.476 user 0m6.380s 00:09:24.476 sys 0m0.711s 00:09:24.476 04:00:17 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:24.476 04:00:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:24.476 ************************************ 00:09:24.476 04:00:17 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test concat 2 read 00:09:24.476 04:00:17 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:09:24.476 04:00:17 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:24.476 04:00:17 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:24.476 ************************************ 00:09:24.476 START TEST raid_read_error_test 00:09:24.476 ************************************ 00:09:24.476 04:00:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test concat 2 read 00:09:24.476 04:00:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat 00:09:24.476 04:00:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2 00:09:24.476 04:00:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:09:24.476 04:00:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:09:24.476 04:00:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:24.476 04:00:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:09:24.476 04:00:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:24.476 04:00:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:24.476 04:00:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:09:24.476 04:00:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:24.476 04:00:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs 
)) 00:09:24.476 04:00:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:09:24.476 04:00:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:09:24.476 04:00:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:09:24.476 04:00:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:09:24.476 04:00:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:09:24.476 04:00:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:09:24.476 04:00:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:09:24.476 04:00:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']' 00:09:24.476 04:00:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:09:24.476 04:00:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:09:24.476 04:00:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:09:24.476 04:00:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.m7PSnrjexs 00:09:24.476 04:00:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:09:24.476 04:00:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=62496 00:09:24.476 04:00:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 62496 00:09:24.476 04:00:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # '[' -z 62496 ']' 00:09:24.476 04:00:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:24.476 04:00:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 
-- # local max_retries=100 00:09:24.476 04:00:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:24.476 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:24.476 04:00:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:24.476 04:00:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:24.476 [2024-12-06 04:00:17.688649] Starting SPDK v25.01-pre git sha1 a4d2a837b / DPDK 24.03.0 initialization... 00:09:24.476 [2024-12-06 04:00:17.688858] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62496 ] 00:09:24.735 [2024-12-06 04:00:17.864583] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:24.735 [2024-12-06 04:00:17.983528] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:24.994 [2024-12-06 04:00:18.191033] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:24.994 [2024-12-06 04:00:18.191195] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:25.253 04:00:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:25.253 04:00:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@868 -- # return 0 00:09:25.253 04:00:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:25.253 04:00:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:09:25.253 04:00:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:25.253 04:00:18 bdev_raid.raid_read_error_test 
-- common/autotest_common.sh@10 -- # set +x 00:09:25.253 BaseBdev1_malloc 00:09:25.253 04:00:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:25.253 04:00:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:09:25.253 04:00:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:25.253 04:00:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:25.253 true 00:09:25.253 04:00:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:25.253 04:00:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:09:25.253 04:00:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:25.253 04:00:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:25.253 [2024-12-06 04:00:18.602099] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:09:25.253 [2024-12-06 04:00:18.602204] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:25.253 [2024-12-06 04:00:18.602227] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:09:25.253 [2024-12-06 04:00:18.602238] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:25.253 [2024-12-06 04:00:18.604491] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:25.253 [2024-12-06 04:00:18.604538] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:09:25.511 BaseBdev1 00:09:25.511 04:00:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:25.511 04:00:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:25.511 04:00:18 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:09:25.511 04:00:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:25.511 04:00:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:25.511 BaseBdev2_malloc 00:09:25.511 04:00:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:25.511 04:00:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:09:25.511 04:00:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:25.511 04:00:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:25.511 true 00:09:25.511 04:00:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:25.511 04:00:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:09:25.511 04:00:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:25.511 04:00:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:25.511 [2024-12-06 04:00:18.669556] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:09:25.512 [2024-12-06 04:00:18.669612] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:25.512 [2024-12-06 04:00:18.669628] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:09:25.512 [2024-12-06 04:00:18.669638] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:25.512 [2024-12-06 04:00:18.671674] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:25.512 [2024-12-06 04:00:18.671712] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 
00:09:25.512 BaseBdev2 00:09:25.512 04:00:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:25.512 04:00:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s 00:09:25.512 04:00:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:25.512 04:00:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:25.512 [2024-12-06 04:00:18.681605] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:25.512 [2024-12-06 04:00:18.683504] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:25.512 [2024-12-06 04:00:18.683713] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:09:25.512 [2024-12-06 04:00:18.683729] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:09:25.512 [2024-12-06 04:00:18.683967] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:09:25.512 [2024-12-06 04:00:18.684211] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:09:25.512 [2024-12-06 04:00:18.684245] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:09:25.512 [2024-12-06 04:00:18.684453] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:25.512 04:00:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:25.512 04:00:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:09:25.512 04:00:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:25.512 04:00:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 
00:09:25.512 04:00:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:25.512 04:00:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:25.512 04:00:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:25.512 04:00:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:25.512 04:00:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:25.512 04:00:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:25.512 04:00:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:25.512 04:00:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:25.512 04:00:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:25.512 04:00:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:25.512 04:00:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:25.512 04:00:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:25.512 04:00:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:25.512 "name": "raid_bdev1", 00:09:25.512 "uuid": "8934d65e-a017-43f1-8413-4aea9325b285", 00:09:25.512 "strip_size_kb": 64, 00:09:25.512 "state": "online", 00:09:25.512 "raid_level": "concat", 00:09:25.512 "superblock": true, 00:09:25.512 "num_base_bdevs": 2, 00:09:25.512 "num_base_bdevs_discovered": 2, 00:09:25.512 "num_base_bdevs_operational": 2, 00:09:25.512 "base_bdevs_list": [ 00:09:25.512 { 00:09:25.512 "name": "BaseBdev1", 00:09:25.512 "uuid": "29890225-5f05-53f3-8b5d-9f28dd32c20f", 00:09:25.512 "is_configured": true, 00:09:25.512 "data_offset": 2048, 00:09:25.512 "data_size": 63488 
00:09:25.512 }, 00:09:25.512 { 00:09:25.512 "name": "BaseBdev2", 00:09:25.512 "uuid": "76c87bc0-33a7-596c-9251-7a55aed865e3", 00:09:25.512 "is_configured": true, 00:09:25.512 "data_offset": 2048, 00:09:25.512 "data_size": 63488 00:09:25.512 } 00:09:25.512 ] 00:09:25.512 }' 00:09:25.512 04:00:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:25.512 04:00:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:25.772 04:00:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:09:25.772 04:00:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:09:26.031 [2024-12-06 04:00:19.206176] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:09:26.969 04:00:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:09:26.969 04:00:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:26.969 04:00:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:26.969 04:00:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:26.969 04:00:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:09:26.969 04:00:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]] 00:09:26.969 04:00:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=2 00:09:26.969 04:00:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:09:26.969 04:00:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:26.969 04:00:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 
00:09:26.969 04:00:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:26.969 04:00:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:26.969 04:00:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:26.969 04:00:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:26.969 04:00:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:26.969 04:00:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:26.969 04:00:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:26.969 04:00:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:26.969 04:00:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:26.969 04:00:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:26.969 04:00:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:26.969 04:00:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:26.969 04:00:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:26.969 "name": "raid_bdev1", 00:09:26.969 "uuid": "8934d65e-a017-43f1-8413-4aea9325b285", 00:09:26.969 "strip_size_kb": 64, 00:09:26.969 "state": "online", 00:09:26.969 "raid_level": "concat", 00:09:26.969 "superblock": true, 00:09:26.969 "num_base_bdevs": 2, 00:09:26.969 "num_base_bdevs_discovered": 2, 00:09:26.969 "num_base_bdevs_operational": 2, 00:09:26.969 "base_bdevs_list": [ 00:09:26.969 { 00:09:26.969 "name": "BaseBdev1", 00:09:26.969 "uuid": "29890225-5f05-53f3-8b5d-9f28dd32c20f", 00:09:26.969 "is_configured": true, 00:09:26.969 "data_offset": 2048, 00:09:26.969 "data_size": 63488 
00:09:26.969 }, 00:09:26.969 { 00:09:26.969 "name": "BaseBdev2", 00:09:26.969 "uuid": "76c87bc0-33a7-596c-9251-7a55aed865e3", 00:09:26.969 "is_configured": true, 00:09:26.969 "data_offset": 2048, 00:09:26.969 "data_size": 63488 00:09:26.969 } 00:09:26.969 ] 00:09:26.969 }' 00:09:26.969 04:00:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:26.969 04:00:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:27.230 04:00:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:09:27.230 04:00:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:27.230 04:00:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:27.230 [2024-12-06 04:00:20.578340] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:27.230 [2024-12-06 04:00:20.578465] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:27.230 [2024-12-06 04:00:20.581669] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:27.230 [2024-12-06 04:00:20.581758] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:27.230 [2024-12-06 04:00:20.581814] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:27.230 [2024-12-06 04:00:20.581868] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:09:27.491 { 00:09:27.491 "results": [ 00:09:27.491 { 00:09:27.491 "job": "raid_bdev1", 00:09:27.491 "core_mask": "0x1", 00:09:27.491 "workload": "randrw", 00:09:27.491 "percentage": 50, 00:09:27.491 "status": "finished", 00:09:27.491 "queue_depth": 1, 00:09:27.491 "io_size": 131072, 00:09:27.491 "runtime": 1.373203, 00:09:27.491 "iops": 15148.525017786882, 00:09:27.491 "mibps": 1893.5656272233603, 00:09:27.491 
"io_failed": 1, 00:09:27.491 "io_timeout": 0, 00:09:27.491 "avg_latency_us": 91.14968579229524, 00:09:27.491 "min_latency_us": 27.94759825327511, 00:09:27.491 "max_latency_us": 1445.2262008733624 00:09:27.491 } 00:09:27.491 ], 00:09:27.491 "core_count": 1 00:09:27.491 } 00:09:27.491 04:00:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:27.491 04:00:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 62496 00:09:27.491 04:00:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # '[' -z 62496 ']' 00:09:27.491 04:00:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # kill -0 62496 00:09:27.491 04:00:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # uname 00:09:27.491 04:00:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:27.491 04:00:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 62496 00:09:27.491 04:00:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:27.491 04:00:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:27.491 04:00:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 62496' 00:09:27.491 killing process with pid 62496 00:09:27.491 04:00:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@973 -- # kill 62496 00:09:27.491 [2024-12-06 04:00:20.615556] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:27.491 04:00:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@978 -- # wait 62496 00:09:27.491 [2024-12-06 04:00:20.756049] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:28.874 04:00:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.m7PSnrjexs 00:09:28.874 04:00:21 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:09:28.874 04:00:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:09:28.874 ************************************ 00:09:28.874 END TEST raid_read_error_test 00:09:28.874 ************************************ 00:09:28.874 04:00:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.73 00:09:28.874 04:00:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy concat 00:09:28.874 04:00:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:09:28.874 04:00:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:09:28.874 04:00:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.73 != \0\.\0\0 ]] 00:09:28.874 00:09:28.874 real 0m4.413s 00:09:28.874 user 0m5.272s 00:09:28.874 sys 0m0.542s 00:09:28.874 04:00:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:28.874 04:00:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:28.874 04:00:22 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test concat 2 write 00:09:28.874 04:00:22 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:09:28.874 04:00:22 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:28.874 04:00:22 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:28.874 ************************************ 00:09:28.874 START TEST raid_write_error_test 00:09:28.874 ************************************ 00:09:28.874 04:00:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test concat 2 write 00:09:28.874 04:00:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat 00:09:28.874 04:00:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2 00:09:28.874 04:00:22 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:09:28.874 04:00:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:09:28.874 04:00:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:28.874 04:00:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:09:28.874 04:00:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:28.874 04:00:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:28.874 04:00:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:09:28.874 04:00:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:28.874 04:00:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:28.874 04:00:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:09:28.874 04:00:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:09:28.874 04:00:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:09:28.874 04:00:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:09:28.874 04:00:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:09:28.874 04:00:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:09:28.874 04:00:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:09:28.874 04:00:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']' 00:09:28.874 04:00:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:09:28.874 04:00:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:09:28.874 04:00:22 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:09:28.874 04:00:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.WR8VukaD74 00:09:28.874 04:00:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=62642 00:09:28.874 04:00:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 62642 00:09:28.874 04:00:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:09:28.875 04:00:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # '[' -z 62642 ']' 00:09:28.875 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:28.875 04:00:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:28.875 04:00:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:28.875 04:00:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:28.875 04:00:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:28.875 04:00:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:28.875 [2024-12-06 04:00:22.168248] Starting SPDK v25.01-pre git sha1 a4d2a837b / DPDK 24.03.0 initialization... 
00:09:28.875 [2024-12-06 04:00:22.168363] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62642 ] 00:09:29.135 [2024-12-06 04:00:22.342322] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:29.135 [2024-12-06 04:00:22.450135] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:29.395 [2024-12-06 04:00:22.646226] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:29.395 [2024-12-06 04:00:22.646318] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:29.974 04:00:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:29.974 04:00:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@868 -- # return 0 00:09:29.974 04:00:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:29.974 04:00:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:09:29.974 04:00:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:29.974 04:00:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:29.974 BaseBdev1_malloc 00:09:29.974 04:00:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:29.975 04:00:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:09:29.975 04:00:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:29.975 04:00:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:29.975 true 00:09:29.975 04:00:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:09:29.975 04:00:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:09:29.975 04:00:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:29.975 04:00:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:29.975 [2024-12-06 04:00:23.120365] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:09:29.975 [2024-12-06 04:00:23.120423] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:29.975 [2024-12-06 04:00:23.120443] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:09:29.975 [2024-12-06 04:00:23.120454] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:29.975 [2024-12-06 04:00:23.122635] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:29.975 [2024-12-06 04:00:23.122677] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:09:29.975 BaseBdev1 00:09:29.975 04:00:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:29.975 04:00:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:29.975 04:00:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:09:29.975 04:00:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:29.975 04:00:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:29.975 BaseBdev2_malloc 00:09:29.975 04:00:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:29.975 04:00:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:09:29.975 04:00:23 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:29.975 04:00:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:29.975 true 00:09:29.975 04:00:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:29.975 04:00:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:09:29.975 04:00:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:29.975 04:00:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:29.975 [2024-12-06 04:00:23.187840] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:09:29.975 [2024-12-06 04:00:23.187980] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:29.975 [2024-12-06 04:00:23.188003] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:09:29.975 [2024-12-06 04:00:23.188016] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:29.975 [2024-12-06 04:00:23.190388] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:29.975 [2024-12-06 04:00:23.190432] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:09:29.975 BaseBdev2 00:09:29.975 04:00:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:29.975 04:00:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s 00:09:29.975 04:00:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:29.975 04:00:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:29.975 [2024-12-06 04:00:23.199907] bdev_raid.c:3326:raid_bdev_configure_base_bdev: 
*DEBUG*: bdev BaseBdev1 is claimed 00:09:29.975 [2024-12-06 04:00:23.202039] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:29.975 [2024-12-06 04:00:23.202293] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:09:29.975 [2024-12-06 04:00:23.202312] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:09:29.975 [2024-12-06 04:00:23.202591] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:09:29.975 [2024-12-06 04:00:23.202802] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:09:29.975 [2024-12-06 04:00:23.202817] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:09:29.975 [2024-12-06 04:00:23.203007] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:29.975 04:00:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:29.975 04:00:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:09:29.975 04:00:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:29.975 04:00:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:29.975 04:00:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:29.975 04:00:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:29.975 04:00:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:29.975 04:00:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:29.975 04:00:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:29.975 04:00:23 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:29.975 04:00:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:29.975 04:00:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:29.975 04:00:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:29.975 04:00:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:29.975 04:00:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:29.975 04:00:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:29.975 04:00:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:29.975 "name": "raid_bdev1", 00:09:29.975 "uuid": "13889a4a-bddf-41b9-a233-154c958f3b91", 00:09:29.975 "strip_size_kb": 64, 00:09:29.975 "state": "online", 00:09:29.975 "raid_level": "concat", 00:09:29.975 "superblock": true, 00:09:29.975 "num_base_bdevs": 2, 00:09:29.975 "num_base_bdevs_discovered": 2, 00:09:29.975 "num_base_bdevs_operational": 2, 00:09:29.975 "base_bdevs_list": [ 00:09:29.975 { 00:09:29.975 "name": "BaseBdev1", 00:09:29.975 "uuid": "169da145-8701-5882-969e-d2d0d1d2b635", 00:09:29.975 "is_configured": true, 00:09:29.975 "data_offset": 2048, 00:09:29.975 "data_size": 63488 00:09:29.975 }, 00:09:29.975 { 00:09:29.975 "name": "BaseBdev2", 00:09:29.975 "uuid": "ab9a56a8-332c-58c7-a9ac-12d9bfb7f699", 00:09:29.975 "is_configured": true, 00:09:29.975 "data_offset": 2048, 00:09:29.975 "data_size": 63488 00:09:29.975 } 00:09:29.975 ] 00:09:29.975 }' 00:09:29.975 04:00:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:29.975 04:00:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:30.559 04:00:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- 
# sleep 1 00:09:30.559 04:00:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:09:30.559 [2024-12-06 04:00:23.740327] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:09:31.497 04:00:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:09:31.497 04:00:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:31.497 04:00:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:31.497 04:00:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:31.497 04:00:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:09:31.497 04:00:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]] 00:09:31.497 04:00:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=2 00:09:31.497 04:00:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:09:31.497 04:00:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:31.497 04:00:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:31.497 04:00:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:31.497 04:00:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:31.497 04:00:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:31.497 04:00:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:31.497 04:00:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 
00:09:31.497 04:00:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:31.497 04:00:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:31.497 04:00:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:31.497 04:00:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:31.497 04:00:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:31.497 04:00:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:31.497 04:00:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:31.497 04:00:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:31.497 "name": "raid_bdev1", 00:09:31.497 "uuid": "13889a4a-bddf-41b9-a233-154c958f3b91", 00:09:31.497 "strip_size_kb": 64, 00:09:31.497 "state": "online", 00:09:31.497 "raid_level": "concat", 00:09:31.497 "superblock": true, 00:09:31.497 "num_base_bdevs": 2, 00:09:31.497 "num_base_bdevs_discovered": 2, 00:09:31.497 "num_base_bdevs_operational": 2, 00:09:31.497 "base_bdevs_list": [ 00:09:31.497 { 00:09:31.497 "name": "BaseBdev1", 00:09:31.497 "uuid": "169da145-8701-5882-969e-d2d0d1d2b635", 00:09:31.497 "is_configured": true, 00:09:31.497 "data_offset": 2048, 00:09:31.497 "data_size": 63488 00:09:31.497 }, 00:09:31.497 { 00:09:31.497 "name": "BaseBdev2", 00:09:31.497 "uuid": "ab9a56a8-332c-58c7-a9ac-12d9bfb7f699", 00:09:31.497 "is_configured": true, 00:09:31.497 "data_offset": 2048, 00:09:31.497 "data_size": 63488 00:09:31.497 } 00:09:31.497 ] 00:09:31.497 }' 00:09:31.497 04:00:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:31.497 04:00:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:31.757 04:00:25 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:09:31.757 04:00:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:31.757 04:00:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:31.757 [2024-12-06 04:00:25.104738] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:31.757 [2024-12-06 04:00:25.104855] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:31.757 [2024-12-06 04:00:25.108036] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:31.757 [2024-12-06 04:00:25.108147] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:31.757 [2024-12-06 04:00:25.108224] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:31.757 [2024-12-06 04:00:25.108280] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:09:32.015 { 00:09:32.015 "results": [ 00:09:32.015 { 00:09:32.015 "job": "raid_bdev1", 00:09:32.015 "core_mask": "0x1", 00:09:32.015 "workload": "randrw", 00:09:32.015 "percentage": 50, 00:09:32.015 "status": "finished", 00:09:32.015 "queue_depth": 1, 00:09:32.015 "io_size": 131072, 00:09:32.015 "runtime": 1.365251, 00:09:32.015 "iops": 14492.57316053971, 00:09:32.015 "mibps": 1811.5716450674638, 00:09:32.015 "io_failed": 1, 00:09:32.015 "io_timeout": 0, 00:09:32.015 "avg_latency_us": 95.37017410089948, 00:09:32.015 "min_latency_us": 27.50043668122271, 00:09:32.015 "max_latency_us": 1538.235807860262 00:09:32.015 } 00:09:32.015 ], 00:09:32.015 "core_count": 1 00:09:32.015 } 00:09:32.015 04:00:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:32.015 04:00:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 62642 00:09:32.015 04:00:25 bdev_raid.raid_write_error_test 
-- common/autotest_common.sh@954 -- # '[' -z 62642 ']' 00:09:32.015 04:00:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # kill -0 62642 00:09:32.015 04:00:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # uname 00:09:32.015 04:00:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:32.015 04:00:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 62642 00:09:32.015 killing process with pid 62642 00:09:32.015 04:00:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:32.015 04:00:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:32.015 04:00:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 62642' 00:09:32.015 04:00:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@973 -- # kill 62642 00:09:32.015 [2024-12-06 04:00:25.146317] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:32.015 04:00:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@978 -- # wait 62642 00:09:32.015 [2024-12-06 04:00:25.289931] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:33.394 04:00:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:09:33.394 04:00:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.WR8VukaD74 00:09:33.394 04:00:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:09:33.394 04:00:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.73 00:09:33.394 04:00:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy concat 00:09:33.394 04:00:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:09:33.394 04:00:26 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@200 -- # return 1 00:09:33.394 04:00:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.73 != \0\.\0\0 ]] 00:09:33.394 00:09:33.394 real 0m4.497s 00:09:33.394 user 0m5.409s 00:09:33.394 sys 0m0.534s 00:09:33.394 ************************************ 00:09:33.394 END TEST raid_write_error_test 00:09:33.394 ************************************ 00:09:33.394 04:00:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:33.394 04:00:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:33.394 04:00:26 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:09:33.394 04:00:26 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test raid1 2 false 00:09:33.394 04:00:26 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:09:33.394 04:00:26 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:33.394 04:00:26 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:33.394 ************************************ 00:09:33.394 START TEST raid_state_function_test 00:09:33.394 ************************************ 00:09:33.394 04:00:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test raid1 2 false 00:09:33.394 04:00:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:09:33.394 04:00:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:09:33.394 04:00:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:09:33.394 04:00:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:09:33.394 04:00:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:09:33.394 04:00:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 
00:09:33.394 04:00:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:09:33.394 04:00:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:33.394 04:00:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:33.395 04:00:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:09:33.395 04:00:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:33.395 04:00:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:33.395 04:00:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:09:33.395 04:00:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:09:33.395 04:00:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:09:33.395 04:00:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:09:33.395 04:00:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:09:33.395 04:00:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:09:33.395 04:00:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:09:33.395 04:00:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:09:33.395 04:00:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:09:33.395 04:00:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:09:33.395 04:00:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:09:33.395 04:00:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=62780 
00:09:33.395 04:00:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 62780' 00:09:33.395 Process raid pid: 62780 00:09:33.395 04:00:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 62780 00:09:33.395 04:00:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 62780 ']' 00:09:33.395 04:00:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:33.395 04:00:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:33.395 04:00:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:33.395 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:33.395 04:00:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:33.395 04:00:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:33.395 [2024-12-06 04:00:26.717222] Starting SPDK v25.01-pre git sha1 a4d2a837b / DPDK 24.03.0 initialization... 
00:09:33.395 [2024-12-06 04:00:26.717495] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:33.655 [2024-12-06 04:00:26.905517] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:33.915 [2024-12-06 04:00:27.030341] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:33.915 [2024-12-06 04:00:27.241627] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:33.915 [2024-12-06 04:00:27.241763] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:34.484 04:00:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:34.484 04:00:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:09:34.484 04:00:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:09:34.484 04:00:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:34.484 04:00:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:34.484 [2024-12-06 04:00:27.584364] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:34.484 [2024-12-06 04:00:27.584490] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:34.484 [2024-12-06 04:00:27.584507] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:34.484 [2024-12-06 04:00:27.584519] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:34.484 04:00:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:34.484 04:00:27 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:09:34.484 04:00:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:34.484 04:00:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:34.484 04:00:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:34.484 04:00:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:34.484 04:00:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:34.484 04:00:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:34.484 04:00:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:34.484 04:00:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:34.484 04:00:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:34.484 04:00:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:34.484 04:00:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:34.484 04:00:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:34.484 04:00:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:34.484 04:00:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:34.484 04:00:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:34.484 "name": "Existed_Raid", 00:09:34.484 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:34.484 "strip_size_kb": 0, 00:09:34.484 "state": "configuring", 00:09:34.484 
"raid_level": "raid1", 00:09:34.484 "superblock": false, 00:09:34.484 "num_base_bdevs": 2, 00:09:34.484 "num_base_bdevs_discovered": 0, 00:09:34.484 "num_base_bdevs_operational": 2, 00:09:34.484 "base_bdevs_list": [ 00:09:34.484 { 00:09:34.484 "name": "BaseBdev1", 00:09:34.484 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:34.484 "is_configured": false, 00:09:34.484 "data_offset": 0, 00:09:34.484 "data_size": 0 00:09:34.484 }, 00:09:34.484 { 00:09:34.484 "name": "BaseBdev2", 00:09:34.484 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:34.484 "is_configured": false, 00:09:34.484 "data_offset": 0, 00:09:34.484 "data_size": 0 00:09:34.484 } 00:09:34.484 ] 00:09:34.484 }' 00:09:34.484 04:00:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:34.484 04:00:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:34.742 04:00:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:34.742 04:00:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:34.743 04:00:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:34.743 [2024-12-06 04:00:28.023653] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:34.743 [2024-12-06 04:00:28.023699] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:09:34.743 04:00:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:34.743 04:00:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:09:34.743 04:00:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:34.743 04:00:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 
00:09:34.743 [2024-12-06 04:00:28.035625] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:34.743 [2024-12-06 04:00:28.035682] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:34.743 [2024-12-06 04:00:28.035694] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:34.743 [2024-12-06 04:00:28.035708] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:34.743 04:00:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:34.743 04:00:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:09:34.743 04:00:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:34.743 04:00:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:34.743 [2024-12-06 04:00:28.090590] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:34.743 BaseBdev1 00:09:34.743 04:00:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:34.743 04:00:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:09:34.743 04:00:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:09:34.743 04:00:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:34.743 04:00:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:09:34.743 04:00:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:34.743 04:00:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:35.002 04:00:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # 
rpc_cmd bdev_wait_for_examine 00:09:35.002 04:00:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:35.002 04:00:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:35.002 04:00:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:35.002 04:00:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:09:35.002 04:00:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:35.002 04:00:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:35.002 [ 00:09:35.002 { 00:09:35.002 "name": "BaseBdev1", 00:09:35.002 "aliases": [ 00:09:35.002 "18818657-8ccb-4543-aa79-6d9cbea06b6f" 00:09:35.002 ], 00:09:35.002 "product_name": "Malloc disk", 00:09:35.002 "block_size": 512, 00:09:35.002 "num_blocks": 65536, 00:09:35.002 "uuid": "18818657-8ccb-4543-aa79-6d9cbea06b6f", 00:09:35.002 "assigned_rate_limits": { 00:09:35.002 "rw_ios_per_sec": 0, 00:09:35.002 "rw_mbytes_per_sec": 0, 00:09:35.002 "r_mbytes_per_sec": 0, 00:09:35.002 "w_mbytes_per_sec": 0 00:09:35.002 }, 00:09:35.002 "claimed": true, 00:09:35.002 "claim_type": "exclusive_write", 00:09:35.002 "zoned": false, 00:09:35.002 "supported_io_types": { 00:09:35.002 "read": true, 00:09:35.002 "write": true, 00:09:35.002 "unmap": true, 00:09:35.002 "flush": true, 00:09:35.002 "reset": true, 00:09:35.002 "nvme_admin": false, 00:09:35.002 "nvme_io": false, 00:09:35.002 "nvme_io_md": false, 00:09:35.002 "write_zeroes": true, 00:09:35.002 "zcopy": true, 00:09:35.002 "get_zone_info": false, 00:09:35.002 "zone_management": false, 00:09:35.002 "zone_append": false, 00:09:35.002 "compare": false, 00:09:35.002 "compare_and_write": false, 00:09:35.002 "abort": true, 00:09:35.002 "seek_hole": false, 00:09:35.002 "seek_data": false, 00:09:35.002 "copy": true, 00:09:35.002 "nvme_iov_md": 
false 00:09:35.002 }, 00:09:35.002 "memory_domains": [ 00:09:35.002 { 00:09:35.002 "dma_device_id": "system", 00:09:35.002 "dma_device_type": 1 00:09:35.002 }, 00:09:35.002 { 00:09:35.002 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:35.002 "dma_device_type": 2 00:09:35.002 } 00:09:35.002 ], 00:09:35.002 "driver_specific": {} 00:09:35.002 } 00:09:35.002 ] 00:09:35.002 04:00:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:35.002 04:00:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:09:35.002 04:00:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:09:35.002 04:00:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:35.002 04:00:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:35.002 04:00:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:35.002 04:00:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:35.002 04:00:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:35.002 04:00:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:35.002 04:00:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:35.002 04:00:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:35.002 04:00:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:35.002 04:00:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:35.002 04:00:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:35.002 
04:00:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:35.002 04:00:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:35.002 04:00:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:35.002 04:00:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:35.002 "name": "Existed_Raid", 00:09:35.002 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:35.002 "strip_size_kb": 0, 00:09:35.002 "state": "configuring", 00:09:35.002 "raid_level": "raid1", 00:09:35.002 "superblock": false, 00:09:35.002 "num_base_bdevs": 2, 00:09:35.002 "num_base_bdevs_discovered": 1, 00:09:35.002 "num_base_bdevs_operational": 2, 00:09:35.002 "base_bdevs_list": [ 00:09:35.002 { 00:09:35.002 "name": "BaseBdev1", 00:09:35.002 "uuid": "18818657-8ccb-4543-aa79-6d9cbea06b6f", 00:09:35.002 "is_configured": true, 00:09:35.002 "data_offset": 0, 00:09:35.002 "data_size": 65536 00:09:35.002 }, 00:09:35.002 { 00:09:35.002 "name": "BaseBdev2", 00:09:35.002 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:35.002 "is_configured": false, 00:09:35.002 "data_offset": 0, 00:09:35.002 "data_size": 0 00:09:35.002 } 00:09:35.002 ] 00:09:35.002 }' 00:09:35.002 04:00:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:35.002 04:00:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:35.261 04:00:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:35.261 04:00:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:35.261 04:00:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:35.261 [2024-12-06 04:00:28.577910] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:35.261 [2024-12-06 04:00:28.577974] bdev_raid.c: 
380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:09:35.261 04:00:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:35.261 04:00:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:09:35.262 04:00:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:35.262 04:00:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:35.262 [2024-12-06 04:00:28.589940] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:35.262 [2024-12-06 04:00:28.592094] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:35.262 [2024-12-06 04:00:28.592143] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:35.262 04:00:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:35.262 04:00:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:09:35.262 04:00:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:35.262 04:00:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:09:35.262 04:00:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:35.262 04:00:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:35.262 04:00:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:35.262 04:00:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:35.262 04:00:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=2 00:09:35.262 04:00:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:35.262 04:00:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:35.262 04:00:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:35.262 04:00:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:35.262 04:00:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:35.262 04:00:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:35.262 04:00:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:35.262 04:00:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:35.262 04:00:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:35.521 04:00:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:35.521 "name": "Existed_Raid", 00:09:35.521 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:35.521 "strip_size_kb": 0, 00:09:35.521 "state": "configuring", 00:09:35.521 "raid_level": "raid1", 00:09:35.521 "superblock": false, 00:09:35.521 "num_base_bdevs": 2, 00:09:35.521 "num_base_bdevs_discovered": 1, 00:09:35.521 "num_base_bdevs_operational": 2, 00:09:35.521 "base_bdevs_list": [ 00:09:35.521 { 00:09:35.521 "name": "BaseBdev1", 00:09:35.521 "uuid": "18818657-8ccb-4543-aa79-6d9cbea06b6f", 00:09:35.521 "is_configured": true, 00:09:35.521 "data_offset": 0, 00:09:35.521 "data_size": 65536 00:09:35.521 }, 00:09:35.521 { 00:09:35.521 "name": "BaseBdev2", 00:09:35.521 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:35.521 "is_configured": false, 00:09:35.521 "data_offset": 0, 00:09:35.521 "data_size": 0 00:09:35.521 } 00:09:35.521 ] 
00:09:35.521 }' 00:09:35.521 04:00:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:35.521 04:00:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:35.781 04:00:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:09:35.781 04:00:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:35.781 04:00:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:35.781 [2024-12-06 04:00:29.089487] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:35.781 [2024-12-06 04:00:29.089616] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:09:35.781 [2024-12-06 04:00:29.089656] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:09:35.781 [2024-12-06 04:00:29.089979] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:09:35.781 [2024-12-06 04:00:29.090244] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:09:35.781 [2024-12-06 04:00:29.090300] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:09:35.781 [2024-12-06 04:00:29.090647] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:35.781 BaseBdev2 00:09:35.781 04:00:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:35.781 04:00:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:09:35.781 04:00:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:09:35.781 04:00:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:35.781 04:00:29 bdev_raid.raid_state_function_test 
-- common/autotest_common.sh@905 -- # local i 00:09:35.781 04:00:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:35.781 04:00:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:35.781 04:00:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:35.781 04:00:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:35.781 04:00:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:35.781 04:00:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:35.781 04:00:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:09:35.781 04:00:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:35.781 04:00:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:35.781 [ 00:09:35.781 { 00:09:35.781 "name": "BaseBdev2", 00:09:35.781 "aliases": [ 00:09:35.781 "5472bc52-cd2e-4327-901a-b892886b1dbb" 00:09:35.781 ], 00:09:35.781 "product_name": "Malloc disk", 00:09:35.781 "block_size": 512, 00:09:35.781 "num_blocks": 65536, 00:09:35.781 "uuid": "5472bc52-cd2e-4327-901a-b892886b1dbb", 00:09:35.781 "assigned_rate_limits": { 00:09:35.781 "rw_ios_per_sec": 0, 00:09:35.781 "rw_mbytes_per_sec": 0, 00:09:35.781 "r_mbytes_per_sec": 0, 00:09:35.781 "w_mbytes_per_sec": 0 00:09:35.781 }, 00:09:35.781 "claimed": true, 00:09:35.781 "claim_type": "exclusive_write", 00:09:35.781 "zoned": false, 00:09:35.781 "supported_io_types": { 00:09:35.781 "read": true, 00:09:35.781 "write": true, 00:09:35.781 "unmap": true, 00:09:35.781 "flush": true, 00:09:35.781 "reset": true, 00:09:35.781 "nvme_admin": false, 00:09:35.781 "nvme_io": false, 00:09:35.781 "nvme_io_md": false, 00:09:35.781 "write_zeroes": 
true, 00:09:35.781 "zcopy": true, 00:09:35.781 "get_zone_info": false, 00:09:35.781 "zone_management": false, 00:09:35.781 "zone_append": false, 00:09:35.781 "compare": false, 00:09:35.781 "compare_and_write": false, 00:09:35.781 "abort": true, 00:09:35.781 "seek_hole": false, 00:09:35.781 "seek_data": false, 00:09:35.781 "copy": true, 00:09:35.781 "nvme_iov_md": false 00:09:35.781 }, 00:09:35.781 "memory_domains": [ 00:09:35.781 { 00:09:35.781 "dma_device_id": "system", 00:09:35.781 "dma_device_type": 1 00:09:35.781 }, 00:09:35.781 { 00:09:35.781 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:35.781 "dma_device_type": 2 00:09:35.781 } 00:09:35.781 ], 00:09:35.781 "driver_specific": {} 00:09:35.781 } 00:09:35.781 ] 00:09:35.781 04:00:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:35.781 04:00:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:09:35.781 04:00:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:09:35.781 04:00:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:35.781 04:00:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:09:35.781 04:00:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:35.781 04:00:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:35.781 04:00:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:35.781 04:00:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:35.781 04:00:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:35.782 04:00:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:35.782 04:00:29 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:35.782 04:00:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:35.782 04:00:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:35.782 04:00:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:35.782 04:00:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:35.782 04:00:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:36.043 04:00:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:36.043 04:00:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:36.043 04:00:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:36.043 "name": "Existed_Raid", 00:09:36.043 "uuid": "a4506ced-74dd-4593-86c4-6eac5e9312dd", 00:09:36.043 "strip_size_kb": 0, 00:09:36.043 "state": "online", 00:09:36.043 "raid_level": "raid1", 00:09:36.043 "superblock": false, 00:09:36.043 "num_base_bdevs": 2, 00:09:36.043 "num_base_bdevs_discovered": 2, 00:09:36.043 "num_base_bdevs_operational": 2, 00:09:36.043 "base_bdevs_list": [ 00:09:36.043 { 00:09:36.043 "name": "BaseBdev1", 00:09:36.043 "uuid": "18818657-8ccb-4543-aa79-6d9cbea06b6f", 00:09:36.043 "is_configured": true, 00:09:36.044 "data_offset": 0, 00:09:36.044 "data_size": 65536 00:09:36.044 }, 00:09:36.044 { 00:09:36.044 "name": "BaseBdev2", 00:09:36.044 "uuid": "5472bc52-cd2e-4327-901a-b892886b1dbb", 00:09:36.044 "is_configured": true, 00:09:36.044 "data_offset": 0, 00:09:36.044 "data_size": 65536 00:09:36.044 } 00:09:36.044 ] 00:09:36.044 }' 00:09:36.044 04:00:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:36.044 04:00:29 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:36.301 04:00:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:09:36.301 04:00:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:09:36.301 04:00:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:36.301 04:00:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:36.301 04:00:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:09:36.301 04:00:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:36.301 04:00:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:09:36.301 04:00:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:36.301 04:00:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:36.301 04:00:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:36.301 [2024-12-06 04:00:29.569107] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:36.301 04:00:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:36.301 04:00:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:36.301 "name": "Existed_Raid", 00:09:36.301 "aliases": [ 00:09:36.301 "a4506ced-74dd-4593-86c4-6eac5e9312dd" 00:09:36.301 ], 00:09:36.301 "product_name": "Raid Volume", 00:09:36.301 "block_size": 512, 00:09:36.301 "num_blocks": 65536, 00:09:36.301 "uuid": "a4506ced-74dd-4593-86c4-6eac5e9312dd", 00:09:36.301 "assigned_rate_limits": { 00:09:36.301 "rw_ios_per_sec": 0, 00:09:36.301 "rw_mbytes_per_sec": 0, 00:09:36.301 "r_mbytes_per_sec": 0, 00:09:36.301 
"w_mbytes_per_sec": 0 00:09:36.301 }, 00:09:36.301 "claimed": false, 00:09:36.301 "zoned": false, 00:09:36.301 "supported_io_types": { 00:09:36.301 "read": true, 00:09:36.301 "write": true, 00:09:36.301 "unmap": false, 00:09:36.301 "flush": false, 00:09:36.301 "reset": true, 00:09:36.301 "nvme_admin": false, 00:09:36.301 "nvme_io": false, 00:09:36.301 "nvme_io_md": false, 00:09:36.301 "write_zeroes": true, 00:09:36.301 "zcopy": false, 00:09:36.301 "get_zone_info": false, 00:09:36.301 "zone_management": false, 00:09:36.301 "zone_append": false, 00:09:36.301 "compare": false, 00:09:36.301 "compare_and_write": false, 00:09:36.301 "abort": false, 00:09:36.301 "seek_hole": false, 00:09:36.301 "seek_data": false, 00:09:36.301 "copy": false, 00:09:36.301 "nvme_iov_md": false 00:09:36.301 }, 00:09:36.301 "memory_domains": [ 00:09:36.301 { 00:09:36.301 "dma_device_id": "system", 00:09:36.301 "dma_device_type": 1 00:09:36.301 }, 00:09:36.301 { 00:09:36.301 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:36.301 "dma_device_type": 2 00:09:36.301 }, 00:09:36.301 { 00:09:36.301 "dma_device_id": "system", 00:09:36.301 "dma_device_type": 1 00:09:36.301 }, 00:09:36.301 { 00:09:36.301 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:36.301 "dma_device_type": 2 00:09:36.301 } 00:09:36.301 ], 00:09:36.301 "driver_specific": { 00:09:36.301 "raid": { 00:09:36.301 "uuid": "a4506ced-74dd-4593-86c4-6eac5e9312dd", 00:09:36.301 "strip_size_kb": 0, 00:09:36.301 "state": "online", 00:09:36.301 "raid_level": "raid1", 00:09:36.301 "superblock": false, 00:09:36.301 "num_base_bdevs": 2, 00:09:36.301 "num_base_bdevs_discovered": 2, 00:09:36.301 "num_base_bdevs_operational": 2, 00:09:36.301 "base_bdevs_list": [ 00:09:36.301 { 00:09:36.301 "name": "BaseBdev1", 00:09:36.301 "uuid": "18818657-8ccb-4543-aa79-6d9cbea06b6f", 00:09:36.301 "is_configured": true, 00:09:36.301 "data_offset": 0, 00:09:36.301 "data_size": 65536 00:09:36.301 }, 00:09:36.301 { 00:09:36.301 "name": "BaseBdev2", 00:09:36.301 "uuid": 
"5472bc52-cd2e-4327-901a-b892886b1dbb", 00:09:36.301 "is_configured": true, 00:09:36.301 "data_offset": 0, 00:09:36.301 "data_size": 65536 00:09:36.301 } 00:09:36.301 ] 00:09:36.301 } 00:09:36.301 } 00:09:36.301 }' 00:09:36.301 04:00:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:36.301 04:00:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:09:36.301 BaseBdev2' 00:09:36.301 04:00:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:36.559 04:00:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:36.559 04:00:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:36.559 04:00:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:09:36.559 04:00:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:36.559 04:00:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:36.559 04:00:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:36.559 04:00:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:36.559 04:00:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:36.559 04:00:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:36.559 04:00:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:36.560 04:00:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:09:36.560 04:00:29 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:36.560 04:00:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:36.560 04:00:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:36.560 04:00:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:36.560 04:00:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:36.560 04:00:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:36.560 04:00:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:09:36.560 04:00:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:36.560 04:00:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:36.560 [2024-12-06 04:00:29.820430] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:09:36.818 04:00:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:36.818 04:00:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:09:36.818 04:00:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:09:36.818 04:00:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:09:36.818 04:00:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@199 -- # return 0 00:09:36.818 04:00:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:09:36.818 04:00:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:09:36.818 04:00:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local 
raid_bdev_name=Existed_Raid 00:09:36.818 04:00:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:36.818 04:00:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:36.818 04:00:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:36.818 04:00:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:09:36.818 04:00:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:36.818 04:00:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:36.818 04:00:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:36.818 04:00:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:36.818 04:00:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:36.818 04:00:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:36.818 04:00:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:36.818 04:00:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:36.818 04:00:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:36.818 04:00:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:36.818 "name": "Existed_Raid", 00:09:36.818 "uuid": "a4506ced-74dd-4593-86c4-6eac5e9312dd", 00:09:36.818 "strip_size_kb": 0, 00:09:36.818 "state": "online", 00:09:36.818 "raid_level": "raid1", 00:09:36.818 "superblock": false, 00:09:36.818 "num_base_bdevs": 2, 00:09:36.818 "num_base_bdevs_discovered": 1, 00:09:36.818 "num_base_bdevs_operational": 1, 00:09:36.818 "base_bdevs_list": [ 00:09:36.818 { 
00:09:36.818 "name": null, 00:09:36.818 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:36.818 "is_configured": false, 00:09:36.818 "data_offset": 0, 00:09:36.818 "data_size": 65536 00:09:36.818 }, 00:09:36.818 { 00:09:36.818 "name": "BaseBdev2", 00:09:36.818 "uuid": "5472bc52-cd2e-4327-901a-b892886b1dbb", 00:09:36.818 "is_configured": true, 00:09:36.818 "data_offset": 0, 00:09:36.818 "data_size": 65536 00:09:36.818 } 00:09:36.818 ] 00:09:36.818 }' 00:09:36.818 04:00:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:36.818 04:00:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:37.077 04:00:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:09:37.077 04:00:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:37.077 04:00:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:09:37.077 04:00:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:37.077 04:00:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:37.077 04:00:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:37.077 04:00:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:37.077 04:00:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:09:37.077 04:00:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:09:37.077 04:00:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:09:37.077 04:00:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:37.077 04:00:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 
00:09:37.077 [2024-12-06 04:00:30.373880] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:09:37.077 [2024-12-06 04:00:30.373990] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:37.336 [2024-12-06 04:00:30.484446] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:37.336 [2024-12-06 04:00:30.484603] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:37.336 [2024-12-06 04:00:30.484624] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:09:37.336 04:00:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:37.336 04:00:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:09:37.336 04:00:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:37.336 04:00:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:37.336 04:00:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:09:37.336 04:00:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:37.336 04:00:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:37.336 04:00:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:37.336 04:00:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:09:37.336 04:00:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:09:37.336 04:00:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:09:37.336 04:00:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 62780 00:09:37.336 04:00:30 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # '[' -z 62780 ']' 00:09:37.336 04:00:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # kill -0 62780 00:09:37.336 04:00:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # uname 00:09:37.336 04:00:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:37.336 04:00:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 62780 00:09:37.336 04:00:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:37.336 04:00:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:37.336 04:00:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 62780' 00:09:37.336 killing process with pid 62780 00:09:37.336 04:00:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@973 -- # kill 62780 00:09:37.336 [2024-12-06 04:00:30.574861] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:37.337 04:00:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@978 -- # wait 62780 00:09:37.337 [2024-12-06 04:00:30.594308] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:38.716 04:00:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:09:38.716 00:09:38.716 real 0m5.279s 00:09:38.716 user 0m7.496s 00:09:38.716 sys 0m0.833s 00:09:38.716 04:00:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:38.716 04:00:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:38.716 ************************************ 00:09:38.716 END TEST raid_state_function_test 00:09:38.716 ************************************ 00:09:38.716 04:00:31 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test 
raid_state_function_test_sb raid_state_function_test raid1 2 true 00:09:38.716 04:00:31 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:09:38.716 04:00:31 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:38.716 04:00:31 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:38.716 ************************************ 00:09:38.716 START TEST raid_state_function_test_sb 00:09:38.716 ************************************ 00:09:38.716 04:00:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test raid1 2 true 00:09:38.716 04:00:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:09:38.716 04:00:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:09:38.716 04:00:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:09:38.716 04:00:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:09:38.716 04:00:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:09:38.716 04:00:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:38.716 04:00:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:09:38.716 04:00:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:38.716 04:00:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:38.716 04:00:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:09:38.716 04:00:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:38.716 04:00:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:38.716 04:00:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # 
base_bdevs=('BaseBdev1' 'BaseBdev2') 00:09:38.716 04:00:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:09:38.716 04:00:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:09:38.716 04:00:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:09:38.716 04:00:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:09:38.716 04:00:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:09:38.716 04:00:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:09:38.716 04:00:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:09:38.716 04:00:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:09:38.716 04:00:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:09:38.716 04:00:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=63033 00:09:38.716 04:00:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:09:38.716 04:00:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 63033' 00:09:38.716 Process raid pid: 63033 00:09:38.716 04:00:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 63033 00:09:38.716 04:00:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 63033 ']' 00:09:38.716 04:00:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:38.716 04:00:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:38.716 04:00:31 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:38.716 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:38.716 04:00:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:38.716 04:00:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:38.716 [2024-12-06 04:00:32.068360] Starting SPDK v25.01-pre git sha1 a4d2a837b / DPDK 24.03.0 initialization... 00:09:38.716 [2024-12-06 04:00:32.068579] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:38.975 [2024-12-06 04:00:32.249442] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:39.235 [2024-12-06 04:00:32.381826] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:39.494 [2024-12-06 04:00:32.612608] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:39.494 [2024-12-06 04:00:32.612752] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:39.753 04:00:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:39.753 04:00:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:09:39.753 04:00:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:09:39.753 04:00:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:39.753 04:00:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:39.753 [2024-12-06 04:00:32.988262] 
bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:39.753 [2024-12-06 04:00:32.988330] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:39.753 [2024-12-06 04:00:32.988343] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:39.753 [2024-12-06 04:00:32.988355] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:39.753 04:00:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:39.753 04:00:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:09:39.753 04:00:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:39.753 04:00:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:39.753 04:00:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:39.753 04:00:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:39.753 04:00:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:39.753 04:00:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:39.753 04:00:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:39.753 04:00:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:39.753 04:00:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:39.753 04:00:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:39.753 04:00:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:09:39.753 04:00:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:39.753 04:00:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:39.753 04:00:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:39.753 04:00:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:39.753 "name": "Existed_Raid", 00:09:39.753 "uuid": "52cd05a5-8274-45bb-a2da-be4b5b52f439", 00:09:39.753 "strip_size_kb": 0, 00:09:39.753 "state": "configuring", 00:09:39.753 "raid_level": "raid1", 00:09:39.753 "superblock": true, 00:09:39.753 "num_base_bdevs": 2, 00:09:39.753 "num_base_bdevs_discovered": 0, 00:09:39.753 "num_base_bdevs_operational": 2, 00:09:39.753 "base_bdevs_list": [ 00:09:39.753 { 00:09:39.753 "name": "BaseBdev1", 00:09:39.753 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:39.753 "is_configured": false, 00:09:39.753 "data_offset": 0, 00:09:39.753 "data_size": 0 00:09:39.753 }, 00:09:39.753 { 00:09:39.753 "name": "BaseBdev2", 00:09:39.753 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:39.753 "is_configured": false, 00:09:39.753 "data_offset": 0, 00:09:39.753 "data_size": 0 00:09:39.753 } 00:09:39.753 ] 00:09:39.753 }' 00:09:39.753 04:00:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:39.753 04:00:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:40.321 04:00:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:40.321 04:00:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:40.321 04:00:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:40.321 [2024-12-06 04:00:33.447382] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid 
bdev: Existed_Raid 00:09:40.321 [2024-12-06 04:00:33.447425] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:09:40.321 04:00:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:40.321 04:00:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:09:40.321 04:00:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:40.321 04:00:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:40.321 [2024-12-06 04:00:33.459387] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:40.321 [2024-12-06 04:00:33.459436] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:40.321 [2024-12-06 04:00:33.459446] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:40.321 [2024-12-06 04:00:33.459459] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:40.321 04:00:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:40.321 04:00:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:09:40.321 04:00:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:40.321 04:00:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:40.321 [2024-12-06 04:00:33.511672] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:40.321 BaseBdev1 00:09:40.321 04:00:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:40.321 04:00:33 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:09:40.321 04:00:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:09:40.321 04:00:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:40.321 04:00:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:09:40.321 04:00:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:40.321 04:00:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:40.321 04:00:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:40.321 04:00:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:40.321 04:00:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:40.321 04:00:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:40.321 04:00:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:09:40.321 04:00:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:40.321 04:00:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:40.321 [ 00:09:40.321 { 00:09:40.321 "name": "BaseBdev1", 00:09:40.321 "aliases": [ 00:09:40.321 "043aad05-bf3d-4911-8f71-b0c4adc7f930" 00:09:40.321 ], 00:09:40.321 "product_name": "Malloc disk", 00:09:40.321 "block_size": 512, 00:09:40.321 "num_blocks": 65536, 00:09:40.321 "uuid": "043aad05-bf3d-4911-8f71-b0c4adc7f930", 00:09:40.321 "assigned_rate_limits": { 00:09:40.321 "rw_ios_per_sec": 0, 00:09:40.321 "rw_mbytes_per_sec": 0, 00:09:40.321 "r_mbytes_per_sec": 0, 00:09:40.321 "w_mbytes_per_sec": 0 00:09:40.321 }, 00:09:40.321 "claimed": true, 
00:09:40.321 "claim_type": "exclusive_write", 00:09:40.321 "zoned": false, 00:09:40.321 "supported_io_types": { 00:09:40.321 "read": true, 00:09:40.321 "write": true, 00:09:40.321 "unmap": true, 00:09:40.321 "flush": true, 00:09:40.321 "reset": true, 00:09:40.321 "nvme_admin": false, 00:09:40.321 "nvme_io": false, 00:09:40.321 "nvme_io_md": false, 00:09:40.321 "write_zeroes": true, 00:09:40.321 "zcopy": true, 00:09:40.321 "get_zone_info": false, 00:09:40.321 "zone_management": false, 00:09:40.321 "zone_append": false, 00:09:40.321 "compare": false, 00:09:40.321 "compare_and_write": false, 00:09:40.321 "abort": true, 00:09:40.321 "seek_hole": false, 00:09:40.321 "seek_data": false, 00:09:40.321 "copy": true, 00:09:40.321 "nvme_iov_md": false 00:09:40.321 }, 00:09:40.321 "memory_domains": [ 00:09:40.322 { 00:09:40.322 "dma_device_id": "system", 00:09:40.322 "dma_device_type": 1 00:09:40.322 }, 00:09:40.322 { 00:09:40.322 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:40.322 "dma_device_type": 2 00:09:40.322 } 00:09:40.322 ], 00:09:40.322 "driver_specific": {} 00:09:40.322 } 00:09:40.322 ] 00:09:40.322 04:00:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:40.322 04:00:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:09:40.322 04:00:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:09:40.322 04:00:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:40.322 04:00:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:40.322 04:00:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:40.322 04:00:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:40.322 04:00:33 bdev_raid.raid_state_function_test_sb 
-- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:40.322 04:00:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:40.322 04:00:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:40.322 04:00:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:40.322 04:00:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:40.322 04:00:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:40.322 04:00:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:40.322 04:00:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:40.322 04:00:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:40.322 04:00:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:40.322 04:00:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:40.322 "name": "Existed_Raid", 00:09:40.322 "uuid": "ca803e77-3805-4786-90cd-9b50fa750de9", 00:09:40.322 "strip_size_kb": 0, 00:09:40.322 "state": "configuring", 00:09:40.322 "raid_level": "raid1", 00:09:40.322 "superblock": true, 00:09:40.322 "num_base_bdevs": 2, 00:09:40.322 "num_base_bdevs_discovered": 1, 00:09:40.322 "num_base_bdevs_operational": 2, 00:09:40.322 "base_bdevs_list": [ 00:09:40.322 { 00:09:40.322 "name": "BaseBdev1", 00:09:40.322 "uuid": "043aad05-bf3d-4911-8f71-b0c4adc7f930", 00:09:40.322 "is_configured": true, 00:09:40.322 "data_offset": 2048, 00:09:40.322 "data_size": 63488 00:09:40.322 }, 00:09:40.322 { 00:09:40.322 "name": "BaseBdev2", 00:09:40.322 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:40.322 "is_configured": false, 00:09:40.322 
"data_offset": 0, 00:09:40.322 "data_size": 0 00:09:40.322 } 00:09:40.322 ] 00:09:40.322 }' 00:09:40.322 04:00:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:40.322 04:00:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:40.892 04:00:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:40.892 04:00:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:40.892 04:00:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:40.892 [2024-12-06 04:00:33.986953] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:40.892 [2024-12-06 04:00:33.987099] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:09:40.892 04:00:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:40.892 04:00:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:09:40.892 04:00:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:40.892 04:00:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:40.892 [2024-12-06 04:00:33.998993] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:40.892 [2024-12-06 04:00:34.001248] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:40.892 [2024-12-06 04:00:34.001347] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:40.892 04:00:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:40.892 04:00:34 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:09:40.892 04:00:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:40.892 04:00:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:09:40.892 04:00:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:40.892 04:00:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:40.892 04:00:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:40.892 04:00:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:40.892 04:00:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:40.892 04:00:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:40.892 04:00:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:40.892 04:00:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:40.892 04:00:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:40.892 04:00:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:40.892 04:00:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:40.892 04:00:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:40.892 04:00:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:40.892 04:00:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:40.892 04:00:34 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:40.892 "name": "Existed_Raid", 00:09:40.892 "uuid": "3bdd6afe-7dcc-4cba-b712-6deecbc6a9cb", 00:09:40.892 "strip_size_kb": 0, 00:09:40.892 "state": "configuring", 00:09:40.892 "raid_level": "raid1", 00:09:40.892 "superblock": true, 00:09:40.892 "num_base_bdevs": 2, 00:09:40.892 "num_base_bdevs_discovered": 1, 00:09:40.892 "num_base_bdevs_operational": 2, 00:09:40.892 "base_bdevs_list": [ 00:09:40.892 { 00:09:40.892 "name": "BaseBdev1", 00:09:40.892 "uuid": "043aad05-bf3d-4911-8f71-b0c4adc7f930", 00:09:40.892 "is_configured": true, 00:09:40.892 "data_offset": 2048, 00:09:40.892 "data_size": 63488 00:09:40.892 }, 00:09:40.892 { 00:09:40.892 "name": "BaseBdev2", 00:09:40.892 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:40.892 "is_configured": false, 00:09:40.892 "data_offset": 0, 00:09:40.892 "data_size": 0 00:09:40.892 } 00:09:40.892 ] 00:09:40.892 }' 00:09:40.893 04:00:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:40.893 04:00:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:41.202 04:00:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:09:41.202 04:00:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:41.202 04:00:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:41.202 [2024-12-06 04:00:34.495532] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:41.202 [2024-12-06 04:00:34.495870] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:09:41.202 [2024-12-06 04:00:34.495923] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:09:41.202 [2024-12-06 04:00:34.496247] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:09:41.202 
[2024-12-06 04:00:34.496491] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:09:41.202 [2024-12-06 04:00:34.496546] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:09:41.202 BaseBdev2 00:09:41.202 [2024-12-06 04:00:34.496773] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:41.202 04:00:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:41.202 04:00:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:09:41.202 04:00:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:09:41.202 04:00:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:41.202 04:00:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:09:41.202 04:00:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:41.202 04:00:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:41.202 04:00:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:41.202 04:00:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:41.202 04:00:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:41.202 04:00:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:41.202 04:00:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:09:41.202 04:00:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:41.202 04:00:34 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:09:41.202 [ 00:09:41.202 { 00:09:41.202 "name": "BaseBdev2", 00:09:41.202 "aliases": [ 00:09:41.202 "a77defe7-4c64-4962-be1f-a164319545da" 00:09:41.202 ], 00:09:41.202 "product_name": "Malloc disk", 00:09:41.202 "block_size": 512, 00:09:41.202 "num_blocks": 65536, 00:09:41.202 "uuid": "a77defe7-4c64-4962-be1f-a164319545da", 00:09:41.202 "assigned_rate_limits": { 00:09:41.202 "rw_ios_per_sec": 0, 00:09:41.202 "rw_mbytes_per_sec": 0, 00:09:41.202 "r_mbytes_per_sec": 0, 00:09:41.202 "w_mbytes_per_sec": 0 00:09:41.202 }, 00:09:41.202 "claimed": true, 00:09:41.202 "claim_type": "exclusive_write", 00:09:41.202 "zoned": false, 00:09:41.202 "supported_io_types": { 00:09:41.202 "read": true, 00:09:41.202 "write": true, 00:09:41.202 "unmap": true, 00:09:41.202 "flush": true, 00:09:41.202 "reset": true, 00:09:41.202 "nvme_admin": false, 00:09:41.202 "nvme_io": false, 00:09:41.202 "nvme_io_md": false, 00:09:41.202 "write_zeroes": true, 00:09:41.202 "zcopy": true, 00:09:41.202 "get_zone_info": false, 00:09:41.203 "zone_management": false, 00:09:41.203 "zone_append": false, 00:09:41.203 "compare": false, 00:09:41.203 "compare_and_write": false, 00:09:41.203 "abort": true, 00:09:41.203 "seek_hole": false, 00:09:41.203 "seek_data": false, 00:09:41.203 "copy": true, 00:09:41.203 "nvme_iov_md": false 00:09:41.203 }, 00:09:41.203 "memory_domains": [ 00:09:41.203 { 00:09:41.480 "dma_device_id": "system", 00:09:41.480 "dma_device_type": 1 00:09:41.480 }, 00:09:41.480 { 00:09:41.480 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:41.480 "dma_device_type": 2 00:09:41.480 } 00:09:41.480 ], 00:09:41.480 "driver_specific": {} 00:09:41.480 } 00:09:41.480 ] 00:09:41.480 04:00:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:41.480 04:00:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:09:41.480 04:00:34 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@250 -- # (( i++ )) 00:09:41.480 04:00:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:41.480 04:00:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:09:41.480 04:00:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:41.480 04:00:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:41.480 04:00:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:41.480 04:00:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:41.480 04:00:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:41.480 04:00:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:41.480 04:00:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:41.480 04:00:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:41.480 04:00:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:41.480 04:00:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:41.480 04:00:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:41.480 04:00:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:41.480 04:00:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:41.480 04:00:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:41.480 04:00:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 
-- # raid_bdev_info='{ 00:09:41.480 "name": "Existed_Raid", 00:09:41.480 "uuid": "3bdd6afe-7dcc-4cba-b712-6deecbc6a9cb", 00:09:41.480 "strip_size_kb": 0, 00:09:41.480 "state": "online", 00:09:41.480 "raid_level": "raid1", 00:09:41.480 "superblock": true, 00:09:41.480 "num_base_bdevs": 2, 00:09:41.480 "num_base_bdevs_discovered": 2, 00:09:41.480 "num_base_bdevs_operational": 2, 00:09:41.480 "base_bdevs_list": [ 00:09:41.480 { 00:09:41.480 "name": "BaseBdev1", 00:09:41.480 "uuid": "043aad05-bf3d-4911-8f71-b0c4adc7f930", 00:09:41.480 "is_configured": true, 00:09:41.480 "data_offset": 2048, 00:09:41.480 "data_size": 63488 00:09:41.480 }, 00:09:41.480 { 00:09:41.480 "name": "BaseBdev2", 00:09:41.480 "uuid": "a77defe7-4c64-4962-be1f-a164319545da", 00:09:41.480 "is_configured": true, 00:09:41.480 "data_offset": 2048, 00:09:41.480 "data_size": 63488 00:09:41.480 } 00:09:41.480 ] 00:09:41.480 }' 00:09:41.480 04:00:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:41.480 04:00:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:41.740 04:00:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:09:41.740 04:00:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:09:41.740 04:00:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:41.740 04:00:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:41.740 04:00:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:09:41.740 04:00:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:41.740 04:00:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:41.740 04:00:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- 
# rpc_cmd bdev_get_bdevs -b Existed_Raid 00:09:41.740 04:00:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:41.740 04:00:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:41.740 [2024-12-06 04:00:34.951210] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:41.740 04:00:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:41.740 04:00:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:41.740 "name": "Existed_Raid", 00:09:41.740 "aliases": [ 00:09:41.740 "3bdd6afe-7dcc-4cba-b712-6deecbc6a9cb" 00:09:41.740 ], 00:09:41.740 "product_name": "Raid Volume", 00:09:41.740 "block_size": 512, 00:09:41.740 "num_blocks": 63488, 00:09:41.740 "uuid": "3bdd6afe-7dcc-4cba-b712-6deecbc6a9cb", 00:09:41.740 "assigned_rate_limits": { 00:09:41.740 "rw_ios_per_sec": 0, 00:09:41.740 "rw_mbytes_per_sec": 0, 00:09:41.740 "r_mbytes_per_sec": 0, 00:09:41.740 "w_mbytes_per_sec": 0 00:09:41.740 }, 00:09:41.740 "claimed": false, 00:09:41.740 "zoned": false, 00:09:41.740 "supported_io_types": { 00:09:41.740 "read": true, 00:09:41.740 "write": true, 00:09:41.740 "unmap": false, 00:09:41.740 "flush": false, 00:09:41.740 "reset": true, 00:09:41.740 "nvme_admin": false, 00:09:41.740 "nvme_io": false, 00:09:41.740 "nvme_io_md": false, 00:09:41.740 "write_zeroes": true, 00:09:41.740 "zcopy": false, 00:09:41.740 "get_zone_info": false, 00:09:41.740 "zone_management": false, 00:09:41.740 "zone_append": false, 00:09:41.740 "compare": false, 00:09:41.740 "compare_and_write": false, 00:09:41.740 "abort": false, 00:09:41.740 "seek_hole": false, 00:09:41.740 "seek_data": false, 00:09:41.740 "copy": false, 00:09:41.740 "nvme_iov_md": false 00:09:41.740 }, 00:09:41.740 "memory_domains": [ 00:09:41.740 { 00:09:41.740 "dma_device_id": "system", 00:09:41.740 "dma_device_type": 1 00:09:41.740 }, 
00:09:41.740 { 00:09:41.740 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:41.740 "dma_device_type": 2 00:09:41.740 }, 00:09:41.740 { 00:09:41.740 "dma_device_id": "system", 00:09:41.740 "dma_device_type": 1 00:09:41.740 }, 00:09:41.740 { 00:09:41.740 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:41.740 "dma_device_type": 2 00:09:41.740 } 00:09:41.740 ], 00:09:41.740 "driver_specific": { 00:09:41.740 "raid": { 00:09:41.740 "uuid": "3bdd6afe-7dcc-4cba-b712-6deecbc6a9cb", 00:09:41.740 "strip_size_kb": 0, 00:09:41.740 "state": "online", 00:09:41.740 "raid_level": "raid1", 00:09:41.740 "superblock": true, 00:09:41.740 "num_base_bdevs": 2, 00:09:41.740 "num_base_bdevs_discovered": 2, 00:09:41.740 "num_base_bdevs_operational": 2, 00:09:41.740 "base_bdevs_list": [ 00:09:41.740 { 00:09:41.740 "name": "BaseBdev1", 00:09:41.740 "uuid": "043aad05-bf3d-4911-8f71-b0c4adc7f930", 00:09:41.740 "is_configured": true, 00:09:41.740 "data_offset": 2048, 00:09:41.740 "data_size": 63488 00:09:41.740 }, 00:09:41.740 { 00:09:41.740 "name": "BaseBdev2", 00:09:41.740 "uuid": "a77defe7-4c64-4962-be1f-a164319545da", 00:09:41.740 "is_configured": true, 00:09:41.741 "data_offset": 2048, 00:09:41.741 "data_size": 63488 00:09:41.741 } 00:09:41.741 ] 00:09:41.741 } 00:09:41.741 } 00:09:41.741 }' 00:09:41.741 04:00:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:41.741 04:00:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:09:41.741 BaseBdev2' 00:09:41.741 04:00:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:41.741 04:00:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:41.741 04:00:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 
00:09:41.741 04:00:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:41.741 04:00:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:09:41.741 04:00:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:41.741 04:00:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:41.741 04:00:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:41.999 04:00:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:41.999 04:00:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:41.999 04:00:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:41.999 04:00:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:09:41.999 04:00:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:41.999 04:00:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:41.999 04:00:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:41.999 04:00:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:41.999 04:00:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:41.999 04:00:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:41.999 04:00:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:09:41.999 04:00:35 bdev_raid.raid_state_function_test_sb 
-- common/autotest_common.sh@563 -- # xtrace_disable 00:09:41.999 04:00:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:41.999 [2024-12-06 04:00:35.162529] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:09:41.999 04:00:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:41.999 04:00:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:09:41.999 04:00:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:09:41.999 04:00:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:09:41.999 04:00:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@199 -- # return 0 00:09:41.999 04:00:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:09:41.999 04:00:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:09:41.999 04:00:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:41.999 04:00:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:41.999 04:00:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:41.999 04:00:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:41.999 04:00:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:09:41.999 04:00:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:41.999 04:00:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:41.999 04:00:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:41.999 
04:00:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:41.999 04:00:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:41.999 04:00:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:41.999 04:00:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:41.999 04:00:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:41.999 04:00:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:41.999 04:00:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:41.999 "name": "Existed_Raid", 00:09:41.999 "uuid": "3bdd6afe-7dcc-4cba-b712-6deecbc6a9cb", 00:09:41.999 "strip_size_kb": 0, 00:09:41.999 "state": "online", 00:09:41.999 "raid_level": "raid1", 00:09:41.999 "superblock": true, 00:09:41.999 "num_base_bdevs": 2, 00:09:41.999 "num_base_bdevs_discovered": 1, 00:09:41.999 "num_base_bdevs_operational": 1, 00:09:41.999 "base_bdevs_list": [ 00:09:41.999 { 00:09:41.999 "name": null, 00:09:41.999 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:41.999 "is_configured": false, 00:09:41.999 "data_offset": 0, 00:09:41.999 "data_size": 63488 00:09:41.999 }, 00:09:41.999 { 00:09:41.999 "name": "BaseBdev2", 00:09:41.999 "uuid": "a77defe7-4c64-4962-be1f-a164319545da", 00:09:41.999 "is_configured": true, 00:09:41.999 "data_offset": 2048, 00:09:41.999 "data_size": 63488 00:09:41.999 } 00:09:41.999 ] 00:09:41.999 }' 00:09:41.999 04:00:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:41.999 04:00:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:42.568 04:00:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:09:42.568 04:00:35 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:42.568 04:00:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:42.568 04:00:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:42.568 04:00:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:09:42.568 04:00:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:42.568 04:00:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:42.568 04:00:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:09:42.568 04:00:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:09:42.568 04:00:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:09:42.568 04:00:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:42.568 04:00:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:42.568 [2024-12-06 04:00:35.757764] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:09:42.568 [2024-12-06 04:00:35.757981] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:42.568 [2024-12-06 04:00:35.866814] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:42.568 [2024-12-06 04:00:35.866973] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:42.568 [2024-12-06 04:00:35.867066] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:09:42.568 04:00:35 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:42.568 04:00:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:09:42.568 04:00:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:42.568 04:00:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:42.568 04:00:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:42.568 04:00:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:42.568 04:00:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:09:42.568 04:00:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:42.568 04:00:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:09:42.568 04:00:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:09:42.568 04:00:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:09:42.568 04:00:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 63033 00:09:42.568 04:00:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 63033 ']' 00:09:42.568 04:00:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 63033 00:09:42.568 04:00:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:09:42.568 04:00:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:42.827 04:00:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 63033 00:09:42.827 04:00:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:42.827 04:00:35 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:42.827 04:00:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 63033' 00:09:42.827 killing process with pid 63033 00:09:42.827 04:00:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 63033 00:09:42.827 [2024-12-06 04:00:35.960277] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:42.827 04:00:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 63033 00:09:42.827 [2024-12-06 04:00:35.979944] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:44.207 04:00:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:09:44.207 00:09:44.207 real 0m5.246s 00:09:44.207 user 0m7.515s 00:09:44.207 sys 0m0.843s 00:09:44.207 04:00:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:44.207 04:00:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:44.207 ************************************ 00:09:44.207 END TEST raid_state_function_test_sb 00:09:44.207 ************************************ 00:09:44.207 04:00:37 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test raid1 2 00:09:44.207 04:00:37 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:09:44.207 04:00:37 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:44.207 04:00:37 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:44.207 ************************************ 00:09:44.207 START TEST raid_superblock_test 00:09:44.207 ************************************ 00:09:44.207 04:00:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test raid1 2 00:09:44.207 04:00:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local 
raid_level=raid1 00:09:44.207 04:00:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2 00:09:44.207 04:00:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:09:44.207 04:00:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:09:44.207 04:00:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:09:44.207 04:00:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:09:44.208 04:00:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:09:44.208 04:00:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:09:44.208 04:00:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:09:44.208 04:00:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:09:44.208 04:00:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:09:44.208 04:00:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:09:44.208 04:00:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:09:44.208 04:00:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid1 '!=' raid1 ']' 00:09:44.208 04:00:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:09:44.208 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:09:44.208 04:00:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=63285 00:09:44.208 04:00:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:09:44.208 04:00:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 63285 00:09:44.208 04:00:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 63285 ']' 00:09:44.208 04:00:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:44.208 04:00:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:44.208 04:00:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:44.208 04:00:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:44.208 04:00:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:44.208 [2024-12-06 04:00:37.396491] Starting SPDK v25.01-pre git sha1 a4d2a837b / DPDK 24.03.0 initialization... 
00:09:44.208 [2024-12-06 04:00:37.396695] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63285 ] 00:09:44.467 [2024-12-06 04:00:37.563459] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:44.467 [2024-12-06 04:00:37.691983] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:44.727 [2024-12-06 04:00:37.917615] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:44.727 [2024-12-06 04:00:37.917780] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:45.298 04:00:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:45.298 04:00:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:09:45.298 04:00:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:09:45.298 04:00:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:09:45.298 04:00:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:09:45.298 04:00:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:09:45.298 04:00:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:09:45.298 04:00:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:09:45.298 04:00:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:09:45.298 04:00:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:09:45.298 04:00:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:09:45.298 
04:00:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:45.298 04:00:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:45.298 malloc1 00:09:45.298 04:00:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:45.298 04:00:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:09:45.298 04:00:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:45.298 04:00:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:45.298 [2024-12-06 04:00:38.401248] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:09:45.298 [2024-12-06 04:00:38.401399] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:45.298 [2024-12-06 04:00:38.401430] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:09:45.298 [2024-12-06 04:00:38.401442] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:45.298 [2024-12-06 04:00:38.403841] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:45.298 [2024-12-06 04:00:38.403888] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:09:45.298 pt1 00:09:45.298 04:00:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:45.298 04:00:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:09:45.298 04:00:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:09:45.298 04:00:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:09:45.298 04:00:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:09:45.298 04:00:38 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:09:45.298 04:00:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:09:45.298 04:00:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:09:45.298 04:00:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:09:45.298 04:00:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:09:45.298 04:00:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:45.298 04:00:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:45.298 malloc2 00:09:45.298 04:00:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:45.298 04:00:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:09:45.298 04:00:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:45.298 04:00:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:45.298 [2024-12-06 04:00:38.458675] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:09:45.298 [2024-12-06 04:00:38.458798] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:45.298 [2024-12-06 04:00:38.458849] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:09:45.298 [2024-12-06 04:00:38.458887] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:45.298 [2024-12-06 04:00:38.461373] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:45.298 [2024-12-06 04:00:38.461452] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:09:45.298 
pt2 00:09:45.298 04:00:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:45.298 04:00:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:09:45.298 04:00:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:09:45.298 04:00:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2'\''' -n raid_bdev1 -s 00:09:45.298 04:00:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:45.298 04:00:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:45.298 [2024-12-06 04:00:38.470707] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:09:45.298 [2024-12-06 04:00:38.472759] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:09:45.298 [2024-12-06 04:00:38.473005] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:09:45.298 [2024-12-06 04:00:38.473070] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:09:45.298 [2024-12-06 04:00:38.473393] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:09:45.298 [2024-12-06 04:00:38.473610] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:09:45.298 [2024-12-06 04:00:38.473660] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:09:45.298 [2024-12-06 04:00:38.473889] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:45.298 04:00:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:45.298 04:00:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:09:45.298 04:00:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 
-- # local raid_bdev_name=raid_bdev1 00:09:45.298 04:00:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:45.298 04:00:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:45.298 04:00:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:45.298 04:00:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:45.298 04:00:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:45.298 04:00:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:45.298 04:00:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:45.298 04:00:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:45.299 04:00:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:45.299 04:00:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:45.299 04:00:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:45.299 04:00:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:45.299 04:00:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:45.299 04:00:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:45.299 "name": "raid_bdev1", 00:09:45.299 "uuid": "31c730a3-e528-4aac-8339-02e4b39d4e0d", 00:09:45.299 "strip_size_kb": 0, 00:09:45.299 "state": "online", 00:09:45.299 "raid_level": "raid1", 00:09:45.299 "superblock": true, 00:09:45.299 "num_base_bdevs": 2, 00:09:45.299 "num_base_bdevs_discovered": 2, 00:09:45.299 "num_base_bdevs_operational": 2, 00:09:45.299 "base_bdevs_list": [ 00:09:45.299 { 00:09:45.299 "name": "pt1", 00:09:45.299 "uuid": 
"00000000-0000-0000-0000-000000000001", 00:09:45.299 "is_configured": true, 00:09:45.299 "data_offset": 2048, 00:09:45.299 "data_size": 63488 00:09:45.299 }, 00:09:45.299 { 00:09:45.299 "name": "pt2", 00:09:45.299 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:45.299 "is_configured": true, 00:09:45.299 "data_offset": 2048, 00:09:45.299 "data_size": 63488 00:09:45.299 } 00:09:45.299 ] 00:09:45.299 }' 00:09:45.299 04:00:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:45.299 04:00:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:45.868 04:00:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:09:45.868 04:00:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:09:45.868 04:00:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:45.868 04:00:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:45.868 04:00:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:09:45.868 04:00:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:45.868 04:00:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:45.868 04:00:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:09:45.868 04:00:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:45.868 04:00:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:45.868 [2024-12-06 04:00:38.926192] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:45.868 04:00:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:45.868 04:00:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # 
raid_bdev_info='{ 00:09:45.868 "name": "raid_bdev1", 00:09:45.868 "aliases": [ 00:09:45.868 "31c730a3-e528-4aac-8339-02e4b39d4e0d" 00:09:45.868 ], 00:09:45.868 "product_name": "Raid Volume", 00:09:45.868 "block_size": 512, 00:09:45.868 "num_blocks": 63488, 00:09:45.868 "uuid": "31c730a3-e528-4aac-8339-02e4b39d4e0d", 00:09:45.868 "assigned_rate_limits": { 00:09:45.868 "rw_ios_per_sec": 0, 00:09:45.868 "rw_mbytes_per_sec": 0, 00:09:45.868 "r_mbytes_per_sec": 0, 00:09:45.868 "w_mbytes_per_sec": 0 00:09:45.868 }, 00:09:45.868 "claimed": false, 00:09:45.868 "zoned": false, 00:09:45.868 "supported_io_types": { 00:09:45.868 "read": true, 00:09:45.868 "write": true, 00:09:45.868 "unmap": false, 00:09:45.868 "flush": false, 00:09:45.868 "reset": true, 00:09:45.868 "nvme_admin": false, 00:09:45.868 "nvme_io": false, 00:09:45.868 "nvme_io_md": false, 00:09:45.868 "write_zeroes": true, 00:09:45.868 "zcopy": false, 00:09:45.868 "get_zone_info": false, 00:09:45.868 "zone_management": false, 00:09:45.868 "zone_append": false, 00:09:45.868 "compare": false, 00:09:45.868 "compare_and_write": false, 00:09:45.868 "abort": false, 00:09:45.868 "seek_hole": false, 00:09:45.868 "seek_data": false, 00:09:45.868 "copy": false, 00:09:45.868 "nvme_iov_md": false 00:09:45.868 }, 00:09:45.868 "memory_domains": [ 00:09:45.868 { 00:09:45.868 "dma_device_id": "system", 00:09:45.868 "dma_device_type": 1 00:09:45.868 }, 00:09:45.868 { 00:09:45.868 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:45.868 "dma_device_type": 2 00:09:45.868 }, 00:09:45.868 { 00:09:45.868 "dma_device_id": "system", 00:09:45.868 "dma_device_type": 1 00:09:45.868 }, 00:09:45.868 { 00:09:45.868 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:45.868 "dma_device_type": 2 00:09:45.868 } 00:09:45.868 ], 00:09:45.868 "driver_specific": { 00:09:45.868 "raid": { 00:09:45.868 "uuid": "31c730a3-e528-4aac-8339-02e4b39d4e0d", 00:09:45.868 "strip_size_kb": 0, 00:09:45.868 "state": "online", 00:09:45.868 "raid_level": "raid1", 
00:09:45.868 "superblock": true, 00:09:45.868 "num_base_bdevs": 2, 00:09:45.868 "num_base_bdevs_discovered": 2, 00:09:45.868 "num_base_bdevs_operational": 2, 00:09:45.868 "base_bdevs_list": [ 00:09:45.868 { 00:09:45.868 "name": "pt1", 00:09:45.868 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:45.868 "is_configured": true, 00:09:45.868 "data_offset": 2048, 00:09:45.868 "data_size": 63488 00:09:45.868 }, 00:09:45.868 { 00:09:45.868 "name": "pt2", 00:09:45.868 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:45.868 "is_configured": true, 00:09:45.868 "data_offset": 2048, 00:09:45.868 "data_size": 63488 00:09:45.868 } 00:09:45.868 ] 00:09:45.868 } 00:09:45.868 } 00:09:45.868 }' 00:09:45.868 04:00:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:45.868 04:00:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:09:45.868 pt2' 00:09:45.868 04:00:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:45.868 04:00:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:45.868 04:00:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:45.868 04:00:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:45.868 04:00:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:09:45.868 04:00:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:45.868 04:00:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:45.868 04:00:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:45.868 04:00:39 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:45.868 04:00:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:45.868 04:00:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:45.868 04:00:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:45.868 04:00:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:09:45.868 04:00:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:45.868 04:00:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:45.868 04:00:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:45.868 04:00:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:45.868 04:00:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:45.868 04:00:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:09:45.868 04:00:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:09:45.868 04:00:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:45.868 04:00:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:45.868 [2024-12-06 04:00:39.157814] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:45.868 04:00:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:45.868 04:00:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=31c730a3-e528-4aac-8339-02e4b39d4e0d 00:09:45.868 04:00:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 31c730a3-e528-4aac-8339-02e4b39d4e0d ']' 00:09:45.868 04:00:39 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:09:45.868 04:00:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:45.868 04:00:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:45.868 [2024-12-06 04:00:39.185434] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:45.868 [2024-12-06 04:00:39.185518] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:45.868 [2024-12-06 04:00:39.185640] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:45.868 [2024-12-06 04:00:39.185714] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:45.868 [2024-12-06 04:00:39.185728] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:09:45.868 04:00:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:45.868 04:00:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:45.868 04:00:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:45.868 04:00:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:09:45.868 04:00:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:45.868 04:00:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:46.128 04:00:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:09:46.128 04:00:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:09:46.128 04:00:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:09:46.128 04:00:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd 
bdev_passthru_delete pt1 00:09:46.128 04:00:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:46.128 04:00:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:46.128 04:00:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:46.128 04:00:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:09:46.128 04:00:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:09:46.128 04:00:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:46.128 04:00:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:46.128 04:00:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:46.128 04:00:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:09:46.128 04:00:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:46.128 04:00:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:46.128 04:00:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:09:46.128 04:00:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:46.128 04:00:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:09:46.128 04:00:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:09:46.128 04:00:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:09:46.128 04:00:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:09:46.128 04:00:39 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:09:46.128 04:00:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:46.128 04:00:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:09:46.128 04:00:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:46.128 04:00:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:09:46.128 04:00:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:46.128 04:00:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:46.128 [2024-12-06 04:00:39.321259] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:09:46.128 [2024-12-06 04:00:39.323384] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:09:46.128 [2024-12-06 04:00:39.323458] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:09:46.128 [2024-12-06 04:00:39.323521] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:09:46.128 [2024-12-06 04:00:39.323538] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:46.128 [2024-12-06 04:00:39.323550] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:09:46.128 request: 00:09:46.128 { 00:09:46.128 "name": "raid_bdev1", 00:09:46.128 "raid_level": "raid1", 00:09:46.128 "base_bdevs": [ 00:09:46.128 "malloc1", 00:09:46.128 "malloc2" 00:09:46.128 ], 00:09:46.128 "superblock": false, 00:09:46.128 "method": "bdev_raid_create", 00:09:46.128 "req_id": 1 00:09:46.128 } 00:09:46.128 Got 
JSON-RPC error response 00:09:46.128 response: 00:09:46.128 { 00:09:46.128 "code": -17, 00:09:46.128 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:09:46.128 } 00:09:46.128 04:00:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:09:46.128 04:00:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:09:46.128 04:00:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:09:46.128 04:00:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:09:46.128 04:00:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:09:46.128 04:00:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:09:46.128 04:00:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:46.128 04:00:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:46.128 04:00:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:46.128 04:00:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:46.128 04:00:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:09:46.128 04:00:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:09:46.128 04:00:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:09:46.128 04:00:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:46.128 04:00:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:46.128 [2024-12-06 04:00:39.377188] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:09:46.128 [2024-12-06 04:00:39.377324] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base 
bdev opened 00:09:46.128 [2024-12-06 04:00:39.377392] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:09:46.128 [2024-12-06 04:00:39.377434] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:46.128 [2024-12-06 04:00:39.379916] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:46.128 [2024-12-06 04:00:39.379998] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:09:46.128 [2024-12-06 04:00:39.380165] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:09:46.128 [2024-12-06 04:00:39.380278] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:09:46.128 pt1 00:09:46.128 04:00:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:46.128 04:00:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:09:46.128 04:00:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:46.128 04:00:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:46.128 04:00:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:46.128 04:00:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:46.128 04:00:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:46.128 04:00:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:46.128 04:00:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:46.128 04:00:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:46.128 04:00:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:46.128 
04:00:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:46.128 04:00:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:46.128 04:00:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:46.128 04:00:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:46.128 04:00:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:46.128 04:00:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:46.128 "name": "raid_bdev1", 00:09:46.128 "uuid": "31c730a3-e528-4aac-8339-02e4b39d4e0d", 00:09:46.128 "strip_size_kb": 0, 00:09:46.128 "state": "configuring", 00:09:46.128 "raid_level": "raid1", 00:09:46.128 "superblock": true, 00:09:46.128 "num_base_bdevs": 2, 00:09:46.128 "num_base_bdevs_discovered": 1, 00:09:46.128 "num_base_bdevs_operational": 2, 00:09:46.128 "base_bdevs_list": [ 00:09:46.128 { 00:09:46.128 "name": "pt1", 00:09:46.128 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:46.128 "is_configured": true, 00:09:46.128 "data_offset": 2048, 00:09:46.128 "data_size": 63488 00:09:46.128 }, 00:09:46.128 { 00:09:46.128 "name": null, 00:09:46.129 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:46.129 "is_configured": false, 00:09:46.129 "data_offset": 2048, 00:09:46.129 "data_size": 63488 00:09:46.129 } 00:09:46.129 ] 00:09:46.129 }' 00:09:46.129 04:00:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:46.129 04:00:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:46.715 04:00:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']' 00:09:46.715 04:00:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:09:46.715 04:00:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs 
)) 00:09:46.715 04:00:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:09:46.715 04:00:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:46.715 04:00:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:46.715 [2024-12-06 04:00:39.812446] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:09:46.715 [2024-12-06 04:00:39.812610] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:46.715 [2024-12-06 04:00:39.812642] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:09:46.715 [2024-12-06 04:00:39.812655] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:46.715 [2024-12-06 04:00:39.813198] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:46.715 [2024-12-06 04:00:39.813224] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:09:46.715 [2024-12-06 04:00:39.813319] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:09:46.715 [2024-12-06 04:00:39.813352] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:09:46.715 [2024-12-06 04:00:39.813497] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:09:46.715 [2024-12-06 04:00:39.813516] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:09:46.715 [2024-12-06 04:00:39.813796] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:09:46.715 [2024-12-06 04:00:39.813979] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:09:46.715 [2024-12-06 04:00:39.813989] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 
0x617000007e80 00:09:46.715 [2024-12-06 04:00:39.814175] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:46.715 pt2 00:09:46.715 04:00:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:46.715 04:00:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:09:46.715 04:00:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:09:46.715 04:00:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:09:46.715 04:00:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:46.715 04:00:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:46.715 04:00:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:46.715 04:00:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:46.715 04:00:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:46.715 04:00:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:46.715 04:00:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:46.715 04:00:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:46.715 04:00:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:46.715 04:00:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:46.715 04:00:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:46.715 04:00:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:46.715 04:00:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 
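The `verify_raid_bdev_state` helper invoked here fetches `bdev_raid_get_bdevs all` and filters it with `jq -r '.[] | select(.name == "raid_bdev1")'` before comparing fields. A self-contained sketch of that check, using a trimmed sample of the JSON shown in this log in place of a live RPC call (assumes `jq` is installed, as the harness itself requires):

```shell
# Stand-in for verify_raid_bdev_state: select the raid bdev by name from a
# bdev_raid_get_bdevs-style array, then compare its state and raid level.
# The JSON literal below is a reduced copy of the output dumped in this log.
bdevs='[ { "name": "raid_bdev1", "state": "online", "raid_level": "raid1",
           "num_base_bdevs_discovered": 2, "num_base_bdevs_operational": 2 } ]'

info=$(echo "$bdevs" | jq -r '.[] | select(.name == "raid_bdev1")')
state=$(echo "$info" | jq -r '.state')
level=$(echo "$info" | jq -r '.raid_level')

[ "$state" = "online" ] && [ "$level" = "raid1" ] && echo "raid_bdev1 verified"
```

In the real test the array comes from `rpc_cmd bdev_raid_get_bdevs all`, and the same pattern is repeated with `expected_state=configuring` while only pt1 is attached.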
00:09:46.715 04:00:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:46.715 04:00:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:46.715 "name": "raid_bdev1", 00:09:46.715 "uuid": "31c730a3-e528-4aac-8339-02e4b39d4e0d", 00:09:46.715 "strip_size_kb": 0, 00:09:46.715 "state": "online", 00:09:46.715 "raid_level": "raid1", 00:09:46.715 "superblock": true, 00:09:46.715 "num_base_bdevs": 2, 00:09:46.715 "num_base_bdevs_discovered": 2, 00:09:46.715 "num_base_bdevs_operational": 2, 00:09:46.715 "base_bdevs_list": [ 00:09:46.715 { 00:09:46.715 "name": "pt1", 00:09:46.715 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:46.715 "is_configured": true, 00:09:46.715 "data_offset": 2048, 00:09:46.715 "data_size": 63488 00:09:46.715 }, 00:09:46.715 { 00:09:46.715 "name": "pt2", 00:09:46.715 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:46.715 "is_configured": true, 00:09:46.715 "data_offset": 2048, 00:09:46.715 "data_size": 63488 00:09:46.715 } 00:09:46.715 ] 00:09:46.715 }' 00:09:46.715 04:00:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:46.715 04:00:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:46.974 04:00:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:09:46.974 04:00:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:09:46.974 04:00:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:46.974 04:00:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:46.974 04:00:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:09:46.974 04:00:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:46.974 04:00:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 
-- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:09:46.974 04:00:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:46.974 04:00:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:46.974 04:00:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:46.974 [2024-12-06 04:00:40.304004] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:46.974 04:00:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:47.233 04:00:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:47.233 "name": "raid_bdev1", 00:09:47.233 "aliases": [ 00:09:47.233 "31c730a3-e528-4aac-8339-02e4b39d4e0d" 00:09:47.233 ], 00:09:47.233 "product_name": "Raid Volume", 00:09:47.233 "block_size": 512, 00:09:47.233 "num_blocks": 63488, 00:09:47.233 "uuid": "31c730a3-e528-4aac-8339-02e4b39d4e0d", 00:09:47.233 "assigned_rate_limits": { 00:09:47.233 "rw_ios_per_sec": 0, 00:09:47.233 "rw_mbytes_per_sec": 0, 00:09:47.233 "r_mbytes_per_sec": 0, 00:09:47.233 "w_mbytes_per_sec": 0 00:09:47.233 }, 00:09:47.233 "claimed": false, 00:09:47.233 "zoned": false, 00:09:47.233 "supported_io_types": { 00:09:47.233 "read": true, 00:09:47.233 "write": true, 00:09:47.233 "unmap": false, 00:09:47.233 "flush": false, 00:09:47.233 "reset": true, 00:09:47.233 "nvme_admin": false, 00:09:47.233 "nvme_io": false, 00:09:47.233 "nvme_io_md": false, 00:09:47.233 "write_zeroes": true, 00:09:47.233 "zcopy": false, 00:09:47.233 "get_zone_info": false, 00:09:47.233 "zone_management": false, 00:09:47.233 "zone_append": false, 00:09:47.233 "compare": false, 00:09:47.233 "compare_and_write": false, 00:09:47.233 "abort": false, 00:09:47.233 "seek_hole": false, 00:09:47.233 "seek_data": false, 00:09:47.233 "copy": false, 00:09:47.233 "nvme_iov_md": false 00:09:47.233 }, 00:09:47.233 "memory_domains": [ 00:09:47.234 { 00:09:47.234 "dma_device_id": 
"system", 00:09:47.234 "dma_device_type": 1 00:09:47.234 }, 00:09:47.234 { 00:09:47.234 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:47.234 "dma_device_type": 2 00:09:47.234 }, 00:09:47.234 { 00:09:47.234 "dma_device_id": "system", 00:09:47.234 "dma_device_type": 1 00:09:47.234 }, 00:09:47.234 { 00:09:47.234 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:47.234 "dma_device_type": 2 00:09:47.234 } 00:09:47.234 ], 00:09:47.234 "driver_specific": { 00:09:47.234 "raid": { 00:09:47.234 "uuid": "31c730a3-e528-4aac-8339-02e4b39d4e0d", 00:09:47.234 "strip_size_kb": 0, 00:09:47.234 "state": "online", 00:09:47.234 "raid_level": "raid1", 00:09:47.234 "superblock": true, 00:09:47.234 "num_base_bdevs": 2, 00:09:47.234 "num_base_bdevs_discovered": 2, 00:09:47.234 "num_base_bdevs_operational": 2, 00:09:47.234 "base_bdevs_list": [ 00:09:47.234 { 00:09:47.234 "name": "pt1", 00:09:47.234 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:47.234 "is_configured": true, 00:09:47.234 "data_offset": 2048, 00:09:47.234 "data_size": 63488 00:09:47.234 }, 00:09:47.234 { 00:09:47.234 "name": "pt2", 00:09:47.234 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:47.234 "is_configured": true, 00:09:47.234 "data_offset": 2048, 00:09:47.234 "data_size": 63488 00:09:47.234 } 00:09:47.234 ] 00:09:47.234 } 00:09:47.234 } 00:09:47.234 }' 00:09:47.234 04:00:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:47.234 04:00:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:09:47.234 pt2' 00:09:47.234 04:00:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:47.234 04:00:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:47.234 04:00:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 
00:09:47.234 04:00:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:47.234 04:00:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:09:47.234 04:00:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:47.234 04:00:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:47.234 04:00:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:47.234 04:00:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:47.234 04:00:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:47.234 04:00:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:47.234 04:00:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:47.234 04:00:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:09:47.234 04:00:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:47.234 04:00:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:47.234 04:00:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:47.234 04:00:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:47.234 04:00:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:47.234 04:00:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:09:47.234 04:00:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:09:47.234 04:00:40 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:09:47.234 04:00:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:47.234 [2024-12-06 04:00:40.527592] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:47.234 04:00:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:47.234 04:00:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 31c730a3-e528-4aac-8339-02e4b39d4e0d '!=' 31c730a3-e528-4aac-8339-02e4b39d4e0d ']' 00:09:47.234 04:00:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1 00:09:47.234 04:00:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:09:47.234 04:00:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@199 -- # return 0 00:09:47.234 04:00:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:09:47.234 04:00:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:47.234 04:00:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:47.234 [2024-12-06 04:00:40.571289] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:09:47.234 04:00:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:47.234 04:00:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:09:47.234 04:00:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:47.234 04:00:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:47.234 04:00:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:47.234 04:00:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:47.234 04:00:40 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:09:47.234 04:00:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:47.234 04:00:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:47.234 04:00:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:47.234 04:00:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:47.234 04:00:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:47.234 04:00:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:47.234 04:00:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:47.234 04:00:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:47.493 04:00:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:47.493 04:00:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:47.493 "name": "raid_bdev1", 00:09:47.493 "uuid": "31c730a3-e528-4aac-8339-02e4b39d4e0d", 00:09:47.493 "strip_size_kb": 0, 00:09:47.493 "state": "online", 00:09:47.493 "raid_level": "raid1", 00:09:47.493 "superblock": true, 00:09:47.493 "num_base_bdevs": 2, 00:09:47.493 "num_base_bdevs_discovered": 1, 00:09:47.493 "num_base_bdevs_operational": 1, 00:09:47.493 "base_bdevs_list": [ 00:09:47.493 { 00:09:47.493 "name": null, 00:09:47.493 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:47.493 "is_configured": false, 00:09:47.493 "data_offset": 0, 00:09:47.493 "data_size": 63488 00:09:47.493 }, 00:09:47.493 { 00:09:47.493 "name": "pt2", 00:09:47.493 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:47.494 "is_configured": true, 00:09:47.494 "data_offset": 2048, 00:09:47.494 "data_size": 63488 00:09:47.494 } 00:09:47.494 ] 00:09:47.494 }' 
00:09:47.494 04:00:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:47.494 04:00:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:47.752 04:00:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:09:47.752 04:00:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:47.752 04:00:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:47.752 [2024-12-06 04:00:41.034470] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:47.752 [2024-12-06 04:00:41.034521] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:47.752 [2024-12-06 04:00:41.034611] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:47.752 [2024-12-06 04:00:41.034668] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:47.752 [2024-12-06 04:00:41.034682] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:09:47.752 04:00:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:47.752 04:00:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:47.752 04:00:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:09:47.752 04:00:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:47.752 04:00:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:47.752 04:00:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:47.752 04:00:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:09:47.752 04:00:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@501 -- # '[' -n '' 
']' 00:09:47.752 04:00:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:09:47.752 04:00:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:09:47.752 04:00:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:09:47.752 04:00:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:47.752 04:00:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:47.752 04:00:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:47.752 04:00:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:09:47.752 04:00:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:09:47.752 04:00:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:09:47.752 04:00:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:09:47.752 04:00:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@519 -- # i=1 00:09:47.752 04:00:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:09:47.752 04:00:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:47.752 04:00:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:47.753 [2024-12-06 04:00:41.102339] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:09:47.753 [2024-12-06 04:00:41.102456] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:47.753 [2024-12-06 04:00:41.102510] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:09:47.753 [2024-12-06 04:00:41.102547] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:47.753 
[2024-12-06 04:00:41.105030] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:47.753 [2024-12-06 04:00:41.105137] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:09:47.753 [2024-12-06 04:00:41.105264] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:09:47.753 [2024-12-06 04:00:41.105371] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:09:47.753 [2024-12-06 04:00:41.105531] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:09:47.753 [2024-12-06 04:00:41.105575] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:09:47.753 [2024-12-06 04:00:41.105859] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:09:48.011 [2024-12-06 04:00:41.106070] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:09:48.011 [2024-12-06 04:00:41.106116] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:09:48.011 [2024-12-06 04:00:41.106365] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:48.011 pt2 00:09:48.011 04:00:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:48.011 04:00:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:09:48.011 04:00:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:48.011 04:00:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:48.011 04:00:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:48.011 04:00:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:48.011 04:00:41 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:09:48.011 04:00:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:48.011 04:00:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:48.011 04:00:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:48.011 04:00:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:48.011 04:00:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:48.011 04:00:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:48.011 04:00:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:48.011 04:00:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:48.011 04:00:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:48.011 04:00:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:48.011 "name": "raid_bdev1", 00:09:48.011 "uuid": "31c730a3-e528-4aac-8339-02e4b39d4e0d", 00:09:48.011 "strip_size_kb": 0, 00:09:48.011 "state": "online", 00:09:48.011 "raid_level": "raid1", 00:09:48.011 "superblock": true, 00:09:48.011 "num_base_bdevs": 2, 00:09:48.011 "num_base_bdevs_discovered": 1, 00:09:48.011 "num_base_bdevs_operational": 1, 00:09:48.011 "base_bdevs_list": [ 00:09:48.011 { 00:09:48.011 "name": null, 00:09:48.011 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:48.011 "is_configured": false, 00:09:48.011 "data_offset": 2048, 00:09:48.011 "data_size": 63488 00:09:48.011 }, 00:09:48.011 { 00:09:48.011 "name": "pt2", 00:09:48.011 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:48.011 "is_configured": true, 00:09:48.011 "data_offset": 2048, 00:09:48.011 "data_size": 63488 00:09:48.011 } 00:09:48.011 ] 00:09:48.011 }' 
00:09:48.011 04:00:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:48.011 04:00:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:48.270 04:00:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:09:48.270 04:00:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:48.270 04:00:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:48.270 [2024-12-06 04:00:41.549685] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:48.270 [2024-12-06 04:00:41.549782] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:48.270 [2024-12-06 04:00:41.549914] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:48.270 [2024-12-06 04:00:41.549981] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:48.270 [2024-12-06 04:00:41.549992] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:09:48.270 04:00:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:48.270 04:00:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:09:48.270 04:00:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:48.270 04:00:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:48.270 04:00:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:48.270 04:00:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:48.270 04:00:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:09:48.270 04:00:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@528 -- # '[' -n '' 
']' 00:09:48.270 04:00:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@532 -- # '[' 2 -gt 2 ']' 00:09:48.270 04:00:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:09:48.270 04:00:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:48.270 04:00:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:48.270 [2024-12-06 04:00:41.605617] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:09:48.270 [2024-12-06 04:00:41.605741] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:48.271 [2024-12-06 04:00:41.605784] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009980 00:09:48.271 [2024-12-06 04:00:41.605821] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:48.271 [2024-12-06 04:00:41.608300] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:48.271 [2024-12-06 04:00:41.608388] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:09:48.271 [2024-12-06 04:00:41.608519] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:09:48.271 [2024-12-06 04:00:41.608611] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:09:48.271 [2024-12-06 04:00:41.608810] bdev_raid.c:3685:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:09:48.271 [2024-12-06 04:00:41.608872] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:48.271 [2024-12-06 04:00:41.608921] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state configuring 00:09:48.271 [2024-12-06 04:00:41.609035] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev 
pt2 is claimed 00:09:48.271 [2024-12-06 04:00:41.609172] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008900 00:09:48.271 [2024-12-06 04:00:41.609211] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:09:48.271 [2024-12-06 04:00:41.609516] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:09:48.271 [2024-12-06 04:00:41.609707] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:09:48.271 [2024-12-06 04:00:41.609756] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:09:48.271 [2024-12-06 04:00:41.609999] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:48.271 pt1 00:09:48.271 04:00:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:48.271 04:00:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@542 -- # '[' 2 -gt 2 ']' 00:09:48.271 04:00:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:09:48.271 04:00:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:48.271 04:00:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:48.271 04:00:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:48.271 04:00:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:48.271 04:00:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:09:48.271 04:00:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:48.271 04:00:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:48.271 04:00:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:09:48.271 04:00:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:48.271 04:00:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:48.271 04:00:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:48.271 04:00:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:48.271 04:00:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:48.530 04:00:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:48.530 04:00:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:48.530 "name": "raid_bdev1", 00:09:48.530 "uuid": "31c730a3-e528-4aac-8339-02e4b39d4e0d", 00:09:48.530 "strip_size_kb": 0, 00:09:48.530 "state": "online", 00:09:48.530 "raid_level": "raid1", 00:09:48.531 "superblock": true, 00:09:48.531 "num_base_bdevs": 2, 00:09:48.531 "num_base_bdevs_discovered": 1, 00:09:48.531 "num_base_bdevs_operational": 1, 00:09:48.531 "base_bdevs_list": [ 00:09:48.531 { 00:09:48.531 "name": null, 00:09:48.531 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:48.531 "is_configured": false, 00:09:48.531 "data_offset": 2048, 00:09:48.531 "data_size": 63488 00:09:48.531 }, 00:09:48.531 { 00:09:48.531 "name": "pt2", 00:09:48.531 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:48.531 "is_configured": true, 00:09:48.531 "data_offset": 2048, 00:09:48.531 "data_size": 63488 00:09:48.531 } 00:09:48.531 ] 00:09:48.531 }' 00:09:48.531 04:00:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:48.531 04:00:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:48.790 04:00:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:09:48.790 04:00:42 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:09:48.790 04:00:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:48.790 04:00:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:48.790 04:00:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:48.790 04:00:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:09:48.790 04:00:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:09:48.790 04:00:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:09:48.790 04:00:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:48.790 04:00:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:48.790 [2024-12-06 04:00:42.125403] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:48.790 04:00:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:49.050 04:00:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # '[' 31c730a3-e528-4aac-8339-02e4b39d4e0d '!=' 31c730a3-e528-4aac-8339-02e4b39d4e0d ']' 00:09:49.050 04:00:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 63285 00:09:49.050 04:00:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 63285 ']' 00:09:49.050 04:00:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # kill -0 63285 00:09:49.050 04:00:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # uname 00:09:49.050 04:00:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:49.050 04:00:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 63285 00:09:49.050 04:00:42 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:49.050 killing process with pid 63285 00:09:49.050 04:00:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:49.050 04:00:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 63285' 00:09:49.050 04:00:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@973 -- # kill 63285 00:09:49.050 [2024-12-06 04:00:42.201657] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:49.050 [2024-12-06 04:00:42.201762] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:49.050 [2024-12-06 04:00:42.201817] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:49.050 [2024-12-06 04:00:42.201833] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 00:09:49.050 04:00:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@978 -- # wait 63285 00:09:49.309 [2024-12-06 04:00:42.443813] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:50.687 ************************************ 00:09:50.687 END TEST raid_superblock_test 00:09:50.687 ************************************ 00:09:50.687 04:00:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:09:50.687 00:09:50.687 real 0m6.425s 00:09:50.687 user 0m9.713s 00:09:50.687 sys 0m1.072s 00:09:50.687 04:00:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:50.687 04:00:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:50.687 04:00:43 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid1 2 read 00:09:50.687 04:00:43 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:09:50.687 04:00:43 bdev_raid -- 
common/autotest_common.sh@1111 -- # xtrace_disable 00:09:50.687 04:00:43 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:50.687 ************************************ 00:09:50.687 START TEST raid_read_error_test 00:09:50.687 ************************************ 00:09:50.687 04:00:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid1 2 read 00:09:50.687 04:00:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1 00:09:50.687 04:00:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2 00:09:50.687 04:00:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:09:50.687 04:00:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:09:50.687 04:00:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:50.687 04:00:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:09:50.687 04:00:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:50.687 04:00:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:50.687 04:00:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:09:50.687 04:00:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:50.687 04:00:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:50.687 04:00:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:09:50.687 04:00:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:09:50.687 04:00:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:09:50.687 04:00:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:09:50.687 04:00:43 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:09:50.687 04:00:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:09:50.687 04:00:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:09:50.687 04:00:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']' 00:09:50.687 04:00:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0 00:09:50.687 04:00:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:09:50.687 04:00:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.0LmEXT1i27 00:09:50.687 04:00:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:09:50.687 04:00:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=63621 00:09:50.687 04:00:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 63621 00:09:50.687 04:00:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # '[' -z 63621 ']' 00:09:50.687 04:00:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:50.687 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:50.687 04:00:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:50.687 04:00:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:09:50.687 04:00:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:50.687 04:00:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:50.687 [2024-12-06 04:00:43.863901] Starting SPDK v25.01-pre git sha1 a4d2a837b / DPDK 24.03.0 initialization... 00:09:50.687 [2024-12-06 04:00:43.864038] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63621 ] 00:09:50.946 [2024-12-06 04:00:44.052570] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:50.946 [2024-12-06 04:00:44.178624] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:51.205 [2024-12-06 04:00:44.395950] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:51.205 [2024-12-06 04:00:44.396022] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:51.465 04:00:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:51.465 04:00:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@868 -- # return 0 00:09:51.465 04:00:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:51.465 04:00:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:09:51.465 04:00:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:51.465 04:00:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:51.725 BaseBdev1_malloc 00:09:51.725 04:00:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:51.725 04:00:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 
00:09:51.725 04:00:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:51.726 04:00:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:51.726 true 00:09:51.726 04:00:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:51.726 04:00:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:09:51.726 04:00:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:51.726 04:00:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:51.726 [2024-12-06 04:00:44.870557] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:09:51.726 [2024-12-06 04:00:44.870655] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:51.726 [2024-12-06 04:00:44.870696] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:09:51.726 [2024-12-06 04:00:44.870709] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:51.726 [2024-12-06 04:00:44.873031] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:51.726 [2024-12-06 04:00:44.873085] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:09:51.726 BaseBdev1 00:09:51.726 04:00:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:51.726 04:00:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:51.726 04:00:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:09:51.726 04:00:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:51.726 04:00:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 
00:09:51.726 BaseBdev2_malloc 00:09:51.726 04:00:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:51.726 04:00:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:09:51.726 04:00:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:51.726 04:00:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:51.726 true 00:09:51.726 04:00:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:51.726 04:00:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:09:51.726 04:00:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:51.726 04:00:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:51.726 [2024-12-06 04:00:44.940272] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:09:51.726 [2024-12-06 04:00:44.940342] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:51.726 [2024-12-06 04:00:44.940366] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:09:51.726 [2024-12-06 04:00:44.940377] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:51.726 [2024-12-06 04:00:44.942856] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:51.726 [2024-12-06 04:00:44.942904] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:09:51.726 BaseBdev2 00:09:51.726 04:00:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:51.726 04:00:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s 00:09:51.726 04:00:44 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:51.726 04:00:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:51.726 [2024-12-06 04:00:44.952361] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:51.726 [2024-12-06 04:00:44.954501] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:51.726 [2024-12-06 04:00:44.954725] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:09:51.726 [2024-12-06 04:00:44.954743] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:09:51.726 [2024-12-06 04:00:44.955030] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:09:51.726 [2024-12-06 04:00:44.955248] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:09:51.726 [2024-12-06 04:00:44.955261] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:09:51.726 [2024-12-06 04:00:44.955464] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:51.726 04:00:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:51.726 04:00:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:09:51.726 04:00:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:51.726 04:00:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:51.726 04:00:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:51.726 04:00:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:51.726 04:00:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 
00:09:51.726 04:00:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:51.726 04:00:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:51.726 04:00:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:51.726 04:00:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:51.726 04:00:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:51.726 04:00:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:51.726 04:00:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:51.726 04:00:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:51.726 04:00:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:51.726 04:00:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:51.726 "name": "raid_bdev1", 00:09:51.726 "uuid": "43446816-1c03-45aa-be2d-cec475c5556e", 00:09:51.726 "strip_size_kb": 0, 00:09:51.726 "state": "online", 00:09:51.726 "raid_level": "raid1", 00:09:51.726 "superblock": true, 00:09:51.726 "num_base_bdevs": 2, 00:09:51.726 "num_base_bdevs_discovered": 2, 00:09:51.726 "num_base_bdevs_operational": 2, 00:09:51.726 "base_bdevs_list": [ 00:09:51.726 { 00:09:51.726 "name": "BaseBdev1", 00:09:51.726 "uuid": "eae4c57d-09d5-56df-bd67-b5d9b45d23d5", 00:09:51.726 "is_configured": true, 00:09:51.726 "data_offset": 2048, 00:09:51.726 "data_size": 63488 00:09:51.726 }, 00:09:51.726 { 00:09:51.726 "name": "BaseBdev2", 00:09:51.726 "uuid": "8995743b-c0df-5149-86e9-b9903f1ac664", 00:09:51.726 "is_configured": true, 00:09:51.726 "data_offset": 2048, 00:09:51.726 "data_size": 63488 00:09:51.726 } 00:09:51.726 ] 00:09:51.726 }' 00:09:51.726 04:00:45 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:51.726 04:00:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:52.038 04:00:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:09:52.038 04:00:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:09:52.297 [2024-12-06 04:00:45.433059] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:09:53.234 04:00:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:09:53.234 04:00:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:53.234 04:00:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:53.234 04:00:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:53.234 04:00:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:09:53.234 04:00:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]] 00:09:53.234 04:00:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ read = \w\r\i\t\e ]] 00:09:53.234 04:00:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=2 00:09:53.234 04:00:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:09:53.234 04:00:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:53.234 04:00:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:53.234 04:00:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:53.234 04:00:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:53.234 04:00:46 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:53.234 04:00:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:53.234 04:00:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:53.234 04:00:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:53.234 04:00:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:53.234 04:00:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:53.234 04:00:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:53.234 04:00:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:53.234 04:00:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:53.234 04:00:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:53.234 04:00:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:53.234 "name": "raid_bdev1", 00:09:53.234 "uuid": "43446816-1c03-45aa-be2d-cec475c5556e", 00:09:53.234 "strip_size_kb": 0, 00:09:53.234 "state": "online", 00:09:53.234 "raid_level": "raid1", 00:09:53.234 "superblock": true, 00:09:53.234 "num_base_bdevs": 2, 00:09:53.234 "num_base_bdevs_discovered": 2, 00:09:53.234 "num_base_bdevs_operational": 2, 00:09:53.234 "base_bdevs_list": [ 00:09:53.234 { 00:09:53.234 "name": "BaseBdev1", 00:09:53.234 "uuid": "eae4c57d-09d5-56df-bd67-b5d9b45d23d5", 00:09:53.234 "is_configured": true, 00:09:53.234 "data_offset": 2048, 00:09:53.234 "data_size": 63488 00:09:53.234 }, 00:09:53.234 { 00:09:53.234 "name": "BaseBdev2", 00:09:53.234 "uuid": "8995743b-c0df-5149-86e9-b9903f1ac664", 00:09:53.234 "is_configured": true, 00:09:53.234 "data_offset": 2048, 00:09:53.234 "data_size": 63488 
00:09:53.234 } 00:09:53.234 ] 00:09:53.234 }' 00:09:53.234 04:00:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:53.234 04:00:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:53.803 04:00:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:09:53.803 04:00:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:53.803 04:00:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:53.803 [2024-12-06 04:00:46.875522] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:53.803 [2024-12-06 04:00:46.875618] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:53.803 [2024-12-06 04:00:46.878366] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:53.803 [2024-12-06 04:00:46.878454] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:53.803 [2024-12-06 04:00:46.878572] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:53.803 [2024-12-06 04:00:46.878629] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:09:53.803 04:00:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:53.803 04:00:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 63621 00:09:53.803 04:00:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # '[' -z 63621 ']' 00:09:53.803 04:00:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # kill -0 63621 00:09:53.803 { 00:09:53.803 "results": [ 00:09:53.803 { 00:09:53.803 "job": "raid_bdev1", 00:09:53.803 "core_mask": "0x1", 00:09:53.803 "workload": "randrw", 00:09:53.803 "percentage": 50, 00:09:53.803 "status": "finished", 
00:09:53.803 "queue_depth": 1, 00:09:53.803 "io_size": 131072, 00:09:53.803 "runtime": 1.443432, 00:09:53.803 "iops": 17531.826923609842, 00:09:53.803 "mibps": 2191.4783654512303, 00:09:53.803 "io_failed": 0, 00:09:53.803 "io_timeout": 0, 00:09:53.803 "avg_latency_us": 54.268309671282886, 00:09:53.803 "min_latency_us": 23.923144104803495, 00:09:53.803 "max_latency_us": 1423.7624454148472 00:09:53.803 } 00:09:53.803 ], 00:09:53.803 "core_count": 1 00:09:53.803 } 00:09:53.803 04:00:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # uname 00:09:53.803 04:00:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:53.803 04:00:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 63621 00:09:53.803 04:00:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:53.803 04:00:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:53.803 04:00:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 63621' 00:09:53.803 killing process with pid 63621 00:09:53.803 04:00:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@973 -- # kill 63621 00:09:53.803 [2024-12-06 04:00:46.920950] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:53.803 04:00:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@978 -- # wait 63621 00:09:53.803 [2024-12-06 04:00:47.054625] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:55.180 04:00:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.0LmEXT1i27 00:09:55.180 04:00:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:09:55.180 04:00:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:09:55.180 04:00:48 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@845 -- # fail_per_s=0.00 00:09:55.180 04:00:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid1 00:09:55.180 04:00:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:09:55.180 04:00:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@199 -- # return 0 00:09:55.180 ************************************ 00:09:55.180 END TEST raid_read_error_test 00:09:55.180 ************************************ 00:09:55.180 04:00:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.00 = \0\.\0\0 ]] 00:09:55.180 00:09:55.180 real 0m4.482s 00:09:55.180 user 0m5.471s 00:09:55.180 sys 0m0.558s 00:09:55.180 04:00:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:55.180 04:00:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:55.180 04:00:48 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test raid1 2 write 00:09:55.180 04:00:48 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:09:55.180 04:00:48 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:55.180 04:00:48 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:55.180 ************************************ 00:09:55.180 START TEST raid_write_error_test 00:09:55.180 ************************************ 00:09:55.180 04:00:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid1 2 write 00:09:55.180 04:00:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1 00:09:55.180 04:00:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2 00:09:55.180 04:00:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:09:55.181 04:00:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:09:55.181 04:00:48 bdev_raid.raid_write_error_test 
-- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:55.181 04:00:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:09:55.181 04:00:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:55.181 04:00:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:55.181 04:00:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:09:55.181 04:00:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:55.181 04:00:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:55.181 04:00:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:09:55.181 04:00:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:09:55.181 04:00:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:09:55.181 04:00:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:09:55.181 04:00:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:09:55.181 04:00:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:09:55.181 04:00:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:09:55.181 04:00:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']' 00:09:55.181 04:00:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0 00:09:55.181 04:00:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:09:55.181 04:00:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.CPfPRdlmgZ 00:09:55.181 04:00:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=63761 00:09:55.181 04:00:48 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:09:55.181 04:00:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 63761 00:09:55.181 04:00:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # '[' -z 63761 ']' 00:09:55.181 04:00:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:55.181 04:00:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:55.181 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:55.181 04:00:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:55.181 04:00:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:55.181 04:00:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:55.181 [2024-12-06 04:00:48.411723] Starting SPDK v25.01-pre git sha1 a4d2a837b / DPDK 24.03.0 initialization... 
00:09:55.181 [2024-12-06 04:00:48.411829] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63761 ] 00:09:55.440 [2024-12-06 04:00:48.586937] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:55.440 [2024-12-06 04:00:48.702178] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:55.699 [2024-12-06 04:00:48.899373] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:55.699 [2024-12-06 04:00:48.899430] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:55.959 04:00:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:55.959 04:00:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@868 -- # return 0 00:09:55.959 04:00:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:55.959 04:00:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:09:55.959 04:00:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:55.959 04:00:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:55.959 BaseBdev1_malloc 00:09:55.959 04:00:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:55.959 04:00:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:09:55.959 04:00:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:55.959 04:00:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:55.959 true 00:09:55.959 04:00:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:09:55.959 04:00:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:09:55.959 04:00:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:55.959 04:00:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:55.959 [2024-12-06 04:00:49.307803] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:09:55.959 [2024-12-06 04:00:49.307856] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:55.959 [2024-12-06 04:00:49.307875] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:09:55.959 [2024-12-06 04:00:49.307885] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:55.959 [2024-12-06 04:00:49.310096] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:55.959 [2024-12-06 04:00:49.310131] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:09:56.220 BaseBdev1 00:09:56.220 04:00:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:56.220 04:00:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:56.220 04:00:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:09:56.220 04:00:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:56.220 04:00:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:56.220 BaseBdev2_malloc 00:09:56.220 04:00:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:56.220 04:00:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:09:56.220 04:00:49 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:56.220 04:00:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:56.220 true 00:09:56.220 04:00:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:56.220 04:00:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:09:56.220 04:00:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:56.220 04:00:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:56.220 [2024-12-06 04:00:49.374243] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:09:56.220 [2024-12-06 04:00:49.374298] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:56.220 [2024-12-06 04:00:49.374314] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:09:56.220 [2024-12-06 04:00:49.374325] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:56.220 [2024-12-06 04:00:49.376397] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:56.220 [2024-12-06 04:00:49.376536] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:09:56.220 BaseBdev2 00:09:56.220 04:00:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:56.220 04:00:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s 00:09:56.220 04:00:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:56.220 04:00:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:56.220 [2024-12-06 04:00:49.386280] bdev_raid.c:3326:raid_bdev_configure_base_bdev: 
*DEBUG*: bdev BaseBdev1 is claimed 00:09:56.220 [2024-12-06 04:00:49.388106] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:56.220 [2024-12-06 04:00:49.388319] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:09:56.220 [2024-12-06 04:00:49.388335] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:09:56.220 [2024-12-06 04:00:49.388577] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:09:56.220 [2024-12-06 04:00:49.388771] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:09:56.220 [2024-12-06 04:00:49.388781] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:09:56.220 [2024-12-06 04:00:49.388921] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:56.220 04:00:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:56.220 04:00:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:09:56.220 04:00:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:56.220 04:00:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:56.220 04:00:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:56.220 04:00:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:56.220 04:00:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:56.220 04:00:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:56.220 04:00:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:56.220 04:00:49 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:56.220 04:00:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:56.220 04:00:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:56.220 04:00:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:56.220 04:00:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:56.220 04:00:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:56.220 04:00:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:56.220 04:00:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:56.220 "name": "raid_bdev1", 00:09:56.220 "uuid": "5e74423c-2ed5-4071-b86f-4f38f8044184", 00:09:56.220 "strip_size_kb": 0, 00:09:56.220 "state": "online", 00:09:56.220 "raid_level": "raid1", 00:09:56.220 "superblock": true, 00:09:56.220 "num_base_bdevs": 2, 00:09:56.220 "num_base_bdevs_discovered": 2, 00:09:56.220 "num_base_bdevs_operational": 2, 00:09:56.220 "base_bdevs_list": [ 00:09:56.220 { 00:09:56.220 "name": "BaseBdev1", 00:09:56.220 "uuid": "97000293-9ba4-5824-885c-6cf79aa1685f", 00:09:56.220 "is_configured": true, 00:09:56.220 "data_offset": 2048, 00:09:56.220 "data_size": 63488 00:09:56.220 }, 00:09:56.220 { 00:09:56.220 "name": "BaseBdev2", 00:09:56.220 "uuid": "f71a9a2b-a3fa-5295-9c56-ecbe13fc2812", 00:09:56.220 "is_configured": true, 00:09:56.220 "data_offset": 2048, 00:09:56.220 "data_size": 63488 00:09:56.220 } 00:09:56.220 ] 00:09:56.220 }' 00:09:56.220 04:00:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:56.220 04:00:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:56.480 04:00:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # 
/home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:09:56.480 04:00:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:09:56.739 [2024-12-06 04:00:49.878602] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:09:57.736 04:00:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:09:57.736 04:00:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:57.736 04:00:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:57.736 [2024-12-06 04:00:50.819410] bdev_raid.c:2276:_raid_bdev_fail_base_bdev: *NOTICE*: Failing base bdev in slot 0 ('BaseBdev1') of raid bdev 'raid_bdev1' 00:09:57.736 [2024-12-06 04:00:50.819552] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:09:57.736 [2024-12-06 04:00:50.819800] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d0000063c0 00:09:57.736 04:00:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:57.736 04:00:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:09:57.736 04:00:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]] 00:09:57.736 04:00:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ write = \w\r\i\t\e ]] 00:09:57.736 04:00:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=1 00:09:57.736 04:00:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:09:57.736 04:00:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:57.736 04:00:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:57.736 04:00:50 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:57.736 04:00:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:57.736 04:00:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:09:57.736 04:00:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:57.736 04:00:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:57.736 04:00:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:57.736 04:00:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:57.736 04:00:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:57.736 04:00:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:57.736 04:00:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:57.736 04:00:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:57.736 04:00:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:57.736 04:00:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:57.736 "name": "raid_bdev1", 00:09:57.736 "uuid": "5e74423c-2ed5-4071-b86f-4f38f8044184", 00:09:57.736 "strip_size_kb": 0, 00:09:57.736 "state": "online", 00:09:57.736 "raid_level": "raid1", 00:09:57.736 "superblock": true, 00:09:57.736 "num_base_bdevs": 2, 00:09:57.736 "num_base_bdevs_discovered": 1, 00:09:57.736 "num_base_bdevs_operational": 1, 00:09:57.736 "base_bdevs_list": [ 00:09:57.736 { 00:09:57.736 "name": null, 00:09:57.736 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:57.736 "is_configured": false, 00:09:57.736 "data_offset": 0, 00:09:57.736 "data_size": 63488 00:09:57.736 }, 
00:09:57.736 { 00:09:57.736 "name": "BaseBdev2", 00:09:57.736 "uuid": "f71a9a2b-a3fa-5295-9c56-ecbe13fc2812", 00:09:57.736 "is_configured": true, 00:09:57.736 "data_offset": 2048, 00:09:57.736 "data_size": 63488 00:09:57.736 } 00:09:57.736 ] 00:09:57.736 }' 00:09:57.736 04:00:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:57.737 04:00:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:58.010 04:00:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:09:58.010 04:00:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:58.010 04:00:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:58.010 [2024-12-06 04:00:51.273307] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:58.010 [2024-12-06 04:00:51.273344] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:58.010 [2024-12-06 04:00:51.276321] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:58.010 { 00:09:58.010 "results": [ 00:09:58.010 { 00:09:58.010 "job": "raid_bdev1", 00:09:58.010 "core_mask": "0x1", 00:09:58.010 "workload": "randrw", 00:09:58.010 "percentage": 50, 00:09:58.010 "status": "finished", 00:09:58.010 "queue_depth": 1, 00:09:58.010 "io_size": 131072, 00:09:58.010 "runtime": 1.395678, 00:09:58.010 "iops": 20921.014732624575, 00:09:58.010 "mibps": 2615.126841578072, 00:09:58.010 "io_failed": 0, 00:09:58.010 "io_timeout": 0, 00:09:58.010 "avg_latency_us": 45.07798954052832, 00:09:58.010 "min_latency_us": 22.358078602620086, 00:09:58.010 "max_latency_us": 1373.6803493449781 00:09:58.010 } 00:09:58.010 ], 00:09:58.010 "core_count": 1 00:09:58.010 } 00:09:58.010 [2024-12-06 04:00:51.276422] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:58.010 [2024-12-06 04:00:51.276495] bdev_raid.c: 
469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:58.010 [2024-12-06 04:00:51.276509] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:09:58.010 04:00:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:58.010 04:00:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 63761 00:09:58.010 04:00:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # '[' -z 63761 ']' 00:09:58.010 04:00:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # kill -0 63761 00:09:58.010 04:00:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # uname 00:09:58.010 04:00:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:58.010 04:00:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 63761 00:09:58.010 04:00:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:58.010 04:00:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:58.010 04:00:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 63761' 00:09:58.010 killing process with pid 63761 00:09:58.010 04:00:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@973 -- # kill 63761 00:09:58.010 04:00:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@978 -- # wait 63761 00:09:58.010 [2024-12-06 04:00:51.321572] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:58.271 [2024-12-06 04:00:51.458074] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:59.655 04:00:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.CPfPRdlmgZ 00:09:59.655 04:00:52 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:09:59.655 04:00:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:09:59.655 04:00:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.00 00:09:59.655 04:00:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid1 00:09:59.655 04:00:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:09:59.655 04:00:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@199 -- # return 0 00:09:59.655 04:00:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.00 = \0\.\0\0 ]] 00:09:59.655 00:09:59.655 real 0m4.384s 00:09:59.655 user 0m5.206s 00:09:59.655 sys 0m0.535s 00:09:59.655 04:00:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:59.655 ************************************ 00:09:59.655 END TEST raid_write_error_test 00:09:59.655 ************************************ 00:09:59.655 04:00:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:59.655 04:00:52 bdev_raid -- bdev/bdev_raid.sh@966 -- # for n in {2..4} 00:09:59.655 04:00:52 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:09:59.655 04:00:52 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test raid0 3 false 00:09:59.655 04:00:52 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:09:59.655 04:00:52 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:59.655 04:00:52 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:59.655 ************************************ 00:09:59.655 START TEST raid_state_function_test 00:09:59.655 ************************************ 00:09:59.655 04:00:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test raid0 3 false 00:09:59.655 04:00:52 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid0 00:09:59.655 04:00:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:09:59.655 04:00:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:09:59.655 04:00:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:09:59.655 04:00:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:09:59.655 04:00:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:59.655 04:00:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:09:59.655 04:00:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:59.655 04:00:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:59.655 04:00:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:09:59.655 04:00:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:59.655 04:00:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:59.655 04:00:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:09:59.655 04:00:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:59.655 04:00:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:59.655 04:00:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:09:59.655 04:00:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:09:59.655 04:00:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:09:59.655 04:00:52 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@211 -- # local strip_size 00:09:59.655 04:00:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:09:59.655 04:00:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:09:59.655 04:00:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']' 00:09:59.655 04:00:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:09:59.655 04:00:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:09:59.655 04:00:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:09:59.655 04:00:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:09:59.655 04:00:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=63910 00:09:59.655 04:00:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:09:59.655 04:00:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 63910' 00:09:59.655 Process raid pid: 63910 00:09:59.655 04:00:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 63910 00:09:59.655 04:00:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 63910 ']' 00:09:59.655 04:00:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:59.655 04:00:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:59.655 04:00:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:59.655 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:09:59.655 04:00:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:59.655 04:00:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:59.655 [2024-12-06 04:00:52.861623] Starting SPDK v25.01-pre git sha1 a4d2a837b / DPDK 24.03.0 initialization... 00:09:59.655 [2024-12-06 04:00:52.861862] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:59.915 [2024-12-06 04:00:53.024577] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:59.915 [2024-12-06 04:00:53.140559] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:00.175 [2024-12-06 04:00:53.353926] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:00.175 [2024-12-06 04:00:53.354089] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:00.436 04:00:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:00.436 04:00:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:10:00.436 04:00:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:10:00.436 04:00:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:00.436 04:00:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:00.436 [2024-12-06 04:00:53.716614] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:00.436 [2024-12-06 04:00:53.716684] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:00.436 [2024-12-06 04:00:53.716696] 
bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:00.436 [2024-12-06 04:00:53.716707] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:00.436 [2024-12-06 04:00:53.716714] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:10:00.436 [2024-12-06 04:00:53.716724] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:00.436 04:00:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:00.436 04:00:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:10:00.436 04:00:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:00.436 04:00:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:00.436 04:00:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:00.436 04:00:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:00.436 04:00:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:00.436 04:00:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:00.436 04:00:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:00.436 04:00:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:00.436 04:00:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:00.436 04:00:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:00.436 04:00:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 
00:10:00.436 04:00:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:00.436 04:00:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:00.436 04:00:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:00.436 04:00:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:00.436 "name": "Existed_Raid", 00:10:00.436 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:00.436 "strip_size_kb": 64, 00:10:00.436 "state": "configuring", 00:10:00.436 "raid_level": "raid0", 00:10:00.436 "superblock": false, 00:10:00.436 "num_base_bdevs": 3, 00:10:00.436 "num_base_bdevs_discovered": 0, 00:10:00.436 "num_base_bdevs_operational": 3, 00:10:00.436 "base_bdevs_list": [ 00:10:00.436 { 00:10:00.436 "name": "BaseBdev1", 00:10:00.436 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:00.436 "is_configured": false, 00:10:00.436 "data_offset": 0, 00:10:00.436 "data_size": 0 00:10:00.436 }, 00:10:00.436 { 00:10:00.436 "name": "BaseBdev2", 00:10:00.436 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:00.436 "is_configured": false, 00:10:00.436 "data_offset": 0, 00:10:00.436 "data_size": 0 00:10:00.436 }, 00:10:00.436 { 00:10:00.436 "name": "BaseBdev3", 00:10:00.436 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:00.436 "is_configured": false, 00:10:00.436 "data_offset": 0, 00:10:00.436 "data_size": 0 00:10:00.436 } 00:10:00.436 ] 00:10:00.436 }' 00:10:00.436 04:00:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:00.436 04:00:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:01.007 04:00:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:01.007 04:00:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:01.007 04:00:54 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:01.007 [2024-12-06 04:00:54.183748] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:01.007 [2024-12-06 04:00:54.183785] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:10:01.007 04:00:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:01.007 04:00:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:10:01.007 04:00:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:01.007 04:00:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:01.007 [2024-12-06 04:00:54.195707] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:01.007 [2024-12-06 04:00:54.195752] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:01.007 [2024-12-06 04:00:54.195761] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:01.007 [2024-12-06 04:00:54.195770] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:01.007 [2024-12-06 04:00:54.195776] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:10:01.007 [2024-12-06 04:00:54.195785] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:01.007 04:00:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:01.007 04:00:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:10:01.007 04:00:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 
00:10:01.007 04:00:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:01.007 [2024-12-06 04:00:54.245257] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:01.007 BaseBdev1 00:10:01.007 04:00:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:01.007 04:00:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:10:01.007 04:00:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:10:01.007 04:00:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:01.007 04:00:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:10:01.007 04:00:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:01.007 04:00:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:01.007 04:00:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:01.007 04:00:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:01.007 04:00:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:01.007 04:00:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:01.007 04:00:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:10:01.007 04:00:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:01.007 04:00:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:01.007 [ 00:10:01.007 { 00:10:01.007 "name": "BaseBdev1", 00:10:01.007 "aliases": [ 00:10:01.007 "343d5f39-cef6-4563-b456-d16245a4babf" 00:10:01.007 ], 00:10:01.007 
"product_name": "Malloc disk", 00:10:01.007 "block_size": 512, 00:10:01.007 "num_blocks": 65536, 00:10:01.007 "uuid": "343d5f39-cef6-4563-b456-d16245a4babf", 00:10:01.007 "assigned_rate_limits": { 00:10:01.007 "rw_ios_per_sec": 0, 00:10:01.007 "rw_mbytes_per_sec": 0, 00:10:01.007 "r_mbytes_per_sec": 0, 00:10:01.007 "w_mbytes_per_sec": 0 00:10:01.007 }, 00:10:01.007 "claimed": true, 00:10:01.007 "claim_type": "exclusive_write", 00:10:01.007 "zoned": false, 00:10:01.007 "supported_io_types": { 00:10:01.007 "read": true, 00:10:01.007 "write": true, 00:10:01.007 "unmap": true, 00:10:01.007 "flush": true, 00:10:01.007 "reset": true, 00:10:01.007 "nvme_admin": false, 00:10:01.007 "nvme_io": false, 00:10:01.007 "nvme_io_md": false, 00:10:01.007 "write_zeroes": true, 00:10:01.007 "zcopy": true, 00:10:01.007 "get_zone_info": false, 00:10:01.007 "zone_management": false, 00:10:01.007 "zone_append": false, 00:10:01.007 "compare": false, 00:10:01.007 "compare_and_write": false, 00:10:01.007 "abort": true, 00:10:01.007 "seek_hole": false, 00:10:01.007 "seek_data": false, 00:10:01.007 "copy": true, 00:10:01.007 "nvme_iov_md": false 00:10:01.007 }, 00:10:01.007 "memory_domains": [ 00:10:01.007 { 00:10:01.007 "dma_device_id": "system", 00:10:01.007 "dma_device_type": 1 00:10:01.007 }, 00:10:01.007 { 00:10:01.007 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:01.007 "dma_device_type": 2 00:10:01.007 } 00:10:01.007 ], 00:10:01.007 "driver_specific": {} 00:10:01.007 } 00:10:01.007 ] 00:10:01.007 04:00:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:01.007 04:00:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:10:01.007 04:00:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:10:01.007 04:00:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:01.007 04:00:54 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:01.007 04:00:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:01.007 04:00:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:01.007 04:00:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:01.007 04:00:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:01.007 04:00:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:01.007 04:00:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:01.007 04:00:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:01.007 04:00:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:01.007 04:00:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:01.007 04:00:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:01.007 04:00:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:01.007 04:00:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:01.007 04:00:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:01.007 "name": "Existed_Raid", 00:10:01.007 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:01.007 "strip_size_kb": 64, 00:10:01.007 "state": "configuring", 00:10:01.007 "raid_level": "raid0", 00:10:01.007 "superblock": false, 00:10:01.007 "num_base_bdevs": 3, 00:10:01.007 "num_base_bdevs_discovered": 1, 00:10:01.007 "num_base_bdevs_operational": 3, 00:10:01.007 "base_bdevs_list": [ 00:10:01.007 { 00:10:01.007 "name": "BaseBdev1", 
00:10:01.007 "uuid": "343d5f39-cef6-4563-b456-d16245a4babf", 00:10:01.007 "is_configured": true, 00:10:01.007 "data_offset": 0, 00:10:01.007 "data_size": 65536 00:10:01.007 }, 00:10:01.007 { 00:10:01.007 "name": "BaseBdev2", 00:10:01.007 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:01.007 "is_configured": false, 00:10:01.007 "data_offset": 0, 00:10:01.007 "data_size": 0 00:10:01.007 }, 00:10:01.007 { 00:10:01.007 "name": "BaseBdev3", 00:10:01.007 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:01.007 "is_configured": false, 00:10:01.007 "data_offset": 0, 00:10:01.007 "data_size": 0 00:10:01.007 } 00:10:01.007 ] 00:10:01.007 }' 00:10:01.007 04:00:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:01.007 04:00:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:01.578 04:00:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:01.578 04:00:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:01.578 04:00:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:01.578 [2024-12-06 04:00:54.716517] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:01.578 [2024-12-06 04:00:54.716573] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:10:01.578 04:00:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:01.578 04:00:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:10:01.578 04:00:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:01.578 04:00:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:01.578 [2024-12-06 
04:00:54.728569] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:01.578 [2024-12-06 04:00:54.730452] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:01.578 [2024-12-06 04:00:54.730497] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:01.578 [2024-12-06 04:00:54.730507] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:10:01.578 [2024-12-06 04:00:54.730516] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:01.578 04:00:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:01.578 04:00:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:10:01.578 04:00:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:01.578 04:00:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:10:01.578 04:00:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:01.578 04:00:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:01.578 04:00:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:01.578 04:00:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:01.578 04:00:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:01.578 04:00:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:01.578 04:00:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:01.578 04:00:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:10:01.578 04:00:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:01.578 04:00:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:01.578 04:00:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:01.578 04:00:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:01.578 04:00:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:01.578 04:00:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:01.578 04:00:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:01.578 "name": "Existed_Raid", 00:10:01.578 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:01.578 "strip_size_kb": 64, 00:10:01.578 "state": "configuring", 00:10:01.578 "raid_level": "raid0", 00:10:01.578 "superblock": false, 00:10:01.578 "num_base_bdevs": 3, 00:10:01.578 "num_base_bdevs_discovered": 1, 00:10:01.578 "num_base_bdevs_operational": 3, 00:10:01.578 "base_bdevs_list": [ 00:10:01.578 { 00:10:01.578 "name": "BaseBdev1", 00:10:01.578 "uuid": "343d5f39-cef6-4563-b456-d16245a4babf", 00:10:01.578 "is_configured": true, 00:10:01.578 "data_offset": 0, 00:10:01.578 "data_size": 65536 00:10:01.578 }, 00:10:01.578 { 00:10:01.578 "name": "BaseBdev2", 00:10:01.578 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:01.578 "is_configured": false, 00:10:01.578 "data_offset": 0, 00:10:01.578 "data_size": 0 00:10:01.578 }, 00:10:01.578 { 00:10:01.578 "name": "BaseBdev3", 00:10:01.578 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:01.578 "is_configured": false, 00:10:01.578 "data_offset": 0, 00:10:01.578 "data_size": 0 00:10:01.578 } 00:10:01.578 ] 00:10:01.578 }' 00:10:01.578 04:00:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 
00:10:01.578 04:00:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:01.847 04:00:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:10:01.847 04:00:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:01.847 04:00:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:02.106 [2024-12-06 04:00:55.225672] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:02.106 BaseBdev2 00:10:02.106 04:00:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:02.106 04:00:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:10:02.106 04:00:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:10:02.106 04:00:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:02.106 04:00:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:10:02.106 04:00:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:02.106 04:00:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:02.106 04:00:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:02.106 04:00:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:02.106 04:00:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:02.106 04:00:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:02.106 04:00:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:10:02.106 04:00:55 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:02.106 04:00:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:02.106 [ 00:10:02.106 { 00:10:02.106 "name": "BaseBdev2", 00:10:02.106 "aliases": [ 00:10:02.106 "c1243cf9-18ec-4b5f-9005-ef506d1695bb" 00:10:02.106 ], 00:10:02.106 "product_name": "Malloc disk", 00:10:02.106 "block_size": 512, 00:10:02.106 "num_blocks": 65536, 00:10:02.106 "uuid": "c1243cf9-18ec-4b5f-9005-ef506d1695bb", 00:10:02.106 "assigned_rate_limits": { 00:10:02.106 "rw_ios_per_sec": 0, 00:10:02.106 "rw_mbytes_per_sec": 0, 00:10:02.107 "r_mbytes_per_sec": 0, 00:10:02.107 "w_mbytes_per_sec": 0 00:10:02.107 }, 00:10:02.107 "claimed": true, 00:10:02.107 "claim_type": "exclusive_write", 00:10:02.107 "zoned": false, 00:10:02.107 "supported_io_types": { 00:10:02.107 "read": true, 00:10:02.107 "write": true, 00:10:02.107 "unmap": true, 00:10:02.107 "flush": true, 00:10:02.107 "reset": true, 00:10:02.107 "nvme_admin": false, 00:10:02.107 "nvme_io": false, 00:10:02.107 "nvme_io_md": false, 00:10:02.107 "write_zeroes": true, 00:10:02.107 "zcopy": true, 00:10:02.107 "get_zone_info": false, 00:10:02.107 "zone_management": false, 00:10:02.107 "zone_append": false, 00:10:02.107 "compare": false, 00:10:02.107 "compare_and_write": false, 00:10:02.107 "abort": true, 00:10:02.107 "seek_hole": false, 00:10:02.107 "seek_data": false, 00:10:02.107 "copy": true, 00:10:02.107 "nvme_iov_md": false 00:10:02.107 }, 00:10:02.107 "memory_domains": [ 00:10:02.107 { 00:10:02.107 "dma_device_id": "system", 00:10:02.107 "dma_device_type": 1 00:10:02.107 }, 00:10:02.107 { 00:10:02.107 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:02.107 "dma_device_type": 2 00:10:02.107 } 00:10:02.107 ], 00:10:02.107 "driver_specific": {} 00:10:02.107 } 00:10:02.107 ] 00:10:02.107 04:00:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:02.107 04:00:55 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:10:02.107 04:00:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:10:02.107 04:00:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:02.107 04:00:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:10:02.107 04:00:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:02.107 04:00:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:02.107 04:00:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:02.107 04:00:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:02.107 04:00:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:02.107 04:00:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:02.107 04:00:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:02.107 04:00:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:02.107 04:00:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:02.107 04:00:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:02.107 04:00:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:02.107 04:00:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:02.107 04:00:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:02.107 04:00:55 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:02.107 04:00:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:02.107 "name": "Existed_Raid", 00:10:02.107 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:02.107 "strip_size_kb": 64, 00:10:02.107 "state": "configuring", 00:10:02.107 "raid_level": "raid0", 00:10:02.107 "superblock": false, 00:10:02.107 "num_base_bdevs": 3, 00:10:02.107 "num_base_bdevs_discovered": 2, 00:10:02.107 "num_base_bdevs_operational": 3, 00:10:02.107 "base_bdevs_list": [ 00:10:02.107 { 00:10:02.107 "name": "BaseBdev1", 00:10:02.107 "uuid": "343d5f39-cef6-4563-b456-d16245a4babf", 00:10:02.107 "is_configured": true, 00:10:02.107 "data_offset": 0, 00:10:02.107 "data_size": 65536 00:10:02.107 }, 00:10:02.107 { 00:10:02.107 "name": "BaseBdev2", 00:10:02.107 "uuid": "c1243cf9-18ec-4b5f-9005-ef506d1695bb", 00:10:02.107 "is_configured": true, 00:10:02.107 "data_offset": 0, 00:10:02.107 "data_size": 65536 00:10:02.107 }, 00:10:02.107 { 00:10:02.107 "name": "BaseBdev3", 00:10:02.107 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:02.107 "is_configured": false, 00:10:02.107 "data_offset": 0, 00:10:02.107 "data_size": 0 00:10:02.107 } 00:10:02.107 ] 00:10:02.107 }' 00:10:02.107 04:00:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:02.107 04:00:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:02.366 04:00:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:10:02.366 04:00:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:02.366 04:00:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:02.624 [2024-12-06 04:00:55.754522] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:02.624 [2024-12-06 04:00:55.754656] 
bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:10:02.624 [2024-12-06 04:00:55.754677] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:10:02.624 [2024-12-06 04:00:55.754996] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:10:02.624 [2024-12-06 04:00:55.755219] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:10:02.624 [2024-12-06 04:00:55.755234] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:10:02.624 [2024-12-06 04:00:55.755522] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:02.624 BaseBdev3 00:10:02.624 04:00:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:02.624 04:00:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:10:02.624 04:00:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:10:02.624 04:00:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:02.624 04:00:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:10:02.624 04:00:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:02.624 04:00:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:02.624 04:00:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:02.624 04:00:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:02.624 04:00:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:02.624 04:00:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:02.624 
04:00:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:10:02.624 04:00:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:02.624 04:00:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:02.624 [ 00:10:02.624 { 00:10:02.624 "name": "BaseBdev3", 00:10:02.624 "aliases": [ 00:10:02.624 "e158d7d3-12c2-49f0-8dad-6d275f73f532" 00:10:02.624 ], 00:10:02.624 "product_name": "Malloc disk", 00:10:02.624 "block_size": 512, 00:10:02.624 "num_blocks": 65536, 00:10:02.624 "uuid": "e158d7d3-12c2-49f0-8dad-6d275f73f532", 00:10:02.624 "assigned_rate_limits": { 00:10:02.624 "rw_ios_per_sec": 0, 00:10:02.624 "rw_mbytes_per_sec": 0, 00:10:02.624 "r_mbytes_per_sec": 0, 00:10:02.624 "w_mbytes_per_sec": 0 00:10:02.624 }, 00:10:02.624 "claimed": true, 00:10:02.624 "claim_type": "exclusive_write", 00:10:02.624 "zoned": false, 00:10:02.624 "supported_io_types": { 00:10:02.624 "read": true, 00:10:02.624 "write": true, 00:10:02.624 "unmap": true, 00:10:02.624 "flush": true, 00:10:02.624 "reset": true, 00:10:02.624 "nvme_admin": false, 00:10:02.624 "nvme_io": false, 00:10:02.624 "nvme_io_md": false, 00:10:02.624 "write_zeroes": true, 00:10:02.624 "zcopy": true, 00:10:02.624 "get_zone_info": false, 00:10:02.624 "zone_management": false, 00:10:02.624 "zone_append": false, 00:10:02.624 "compare": false, 00:10:02.624 "compare_and_write": false, 00:10:02.624 "abort": true, 00:10:02.624 "seek_hole": false, 00:10:02.624 "seek_data": false, 00:10:02.624 "copy": true, 00:10:02.624 "nvme_iov_md": false 00:10:02.624 }, 00:10:02.624 "memory_domains": [ 00:10:02.624 { 00:10:02.624 "dma_device_id": "system", 00:10:02.624 "dma_device_type": 1 00:10:02.624 }, 00:10:02.624 { 00:10:02.624 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:02.624 "dma_device_type": 2 00:10:02.624 } 00:10:02.624 ], 00:10:02.624 "driver_specific": {} 00:10:02.624 } 00:10:02.624 ] 
00:10:02.624 04:00:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:02.624 04:00:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:10:02.624 04:00:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:10:02.624 04:00:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:02.624 04:00:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 64 3 00:10:02.624 04:00:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:02.624 04:00:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:02.624 04:00:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:02.624 04:00:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:02.624 04:00:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:02.624 04:00:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:02.624 04:00:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:02.624 04:00:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:02.624 04:00:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:02.624 04:00:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:02.624 04:00:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:02.624 04:00:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:02.624 04:00:55 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:10:02.624 04:00:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:02.624 04:00:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:02.624 "name": "Existed_Raid", 00:10:02.624 "uuid": "ac7ebdae-0476-4442-ac53-98f0a8d075c6", 00:10:02.624 "strip_size_kb": 64, 00:10:02.624 "state": "online", 00:10:02.624 "raid_level": "raid0", 00:10:02.624 "superblock": false, 00:10:02.624 "num_base_bdevs": 3, 00:10:02.624 "num_base_bdevs_discovered": 3, 00:10:02.624 "num_base_bdevs_operational": 3, 00:10:02.624 "base_bdevs_list": [ 00:10:02.624 { 00:10:02.624 "name": "BaseBdev1", 00:10:02.624 "uuid": "343d5f39-cef6-4563-b456-d16245a4babf", 00:10:02.625 "is_configured": true, 00:10:02.625 "data_offset": 0, 00:10:02.625 "data_size": 65536 00:10:02.625 }, 00:10:02.625 { 00:10:02.625 "name": "BaseBdev2", 00:10:02.625 "uuid": "c1243cf9-18ec-4b5f-9005-ef506d1695bb", 00:10:02.625 "is_configured": true, 00:10:02.625 "data_offset": 0, 00:10:02.625 "data_size": 65536 00:10:02.625 }, 00:10:02.625 { 00:10:02.625 "name": "BaseBdev3", 00:10:02.625 "uuid": "e158d7d3-12c2-49f0-8dad-6d275f73f532", 00:10:02.625 "is_configured": true, 00:10:02.625 "data_offset": 0, 00:10:02.625 "data_size": 65536 00:10:02.625 } 00:10:02.625 ] 00:10:02.625 }' 00:10:02.625 04:00:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:02.625 04:00:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:02.964 04:00:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:10:02.964 04:00:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:10:02.964 04:00:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:02.964 04:00:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # 
local base_bdev_names 00:10:02.964 04:00:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:10:02.964 04:00:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:02.964 04:00:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:10:02.964 04:00:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:02.964 04:00:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:02.964 04:00:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:02.964 [2024-12-06 04:00:56.261881] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:02.964 04:00:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:03.239 04:00:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:03.239 "name": "Existed_Raid", 00:10:03.239 "aliases": [ 00:10:03.239 "ac7ebdae-0476-4442-ac53-98f0a8d075c6" 00:10:03.239 ], 00:10:03.239 "product_name": "Raid Volume", 00:10:03.239 "block_size": 512, 00:10:03.239 "num_blocks": 196608, 00:10:03.239 "uuid": "ac7ebdae-0476-4442-ac53-98f0a8d075c6", 00:10:03.239 "assigned_rate_limits": { 00:10:03.239 "rw_ios_per_sec": 0, 00:10:03.239 "rw_mbytes_per_sec": 0, 00:10:03.239 "r_mbytes_per_sec": 0, 00:10:03.239 "w_mbytes_per_sec": 0 00:10:03.239 }, 00:10:03.239 "claimed": false, 00:10:03.239 "zoned": false, 00:10:03.239 "supported_io_types": { 00:10:03.239 "read": true, 00:10:03.239 "write": true, 00:10:03.239 "unmap": true, 00:10:03.239 "flush": true, 00:10:03.239 "reset": true, 00:10:03.239 "nvme_admin": false, 00:10:03.239 "nvme_io": false, 00:10:03.239 "nvme_io_md": false, 00:10:03.239 "write_zeroes": true, 00:10:03.239 "zcopy": false, 00:10:03.239 "get_zone_info": false, 00:10:03.239 "zone_management": false, 00:10:03.239 
"zone_append": false, 00:10:03.239 "compare": false, 00:10:03.239 "compare_and_write": false, 00:10:03.239 "abort": false, 00:10:03.239 "seek_hole": false, 00:10:03.239 "seek_data": false, 00:10:03.239 "copy": false, 00:10:03.239 "nvme_iov_md": false 00:10:03.239 }, 00:10:03.239 "memory_domains": [ 00:10:03.239 { 00:10:03.239 "dma_device_id": "system", 00:10:03.239 "dma_device_type": 1 00:10:03.239 }, 00:10:03.239 { 00:10:03.239 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:03.239 "dma_device_type": 2 00:10:03.239 }, 00:10:03.239 { 00:10:03.239 "dma_device_id": "system", 00:10:03.239 "dma_device_type": 1 00:10:03.239 }, 00:10:03.239 { 00:10:03.239 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:03.239 "dma_device_type": 2 00:10:03.239 }, 00:10:03.239 { 00:10:03.239 "dma_device_id": "system", 00:10:03.239 "dma_device_type": 1 00:10:03.239 }, 00:10:03.239 { 00:10:03.239 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:03.239 "dma_device_type": 2 00:10:03.239 } 00:10:03.239 ], 00:10:03.239 "driver_specific": { 00:10:03.239 "raid": { 00:10:03.239 "uuid": "ac7ebdae-0476-4442-ac53-98f0a8d075c6", 00:10:03.239 "strip_size_kb": 64, 00:10:03.239 "state": "online", 00:10:03.239 "raid_level": "raid0", 00:10:03.239 "superblock": false, 00:10:03.239 "num_base_bdevs": 3, 00:10:03.239 "num_base_bdevs_discovered": 3, 00:10:03.239 "num_base_bdevs_operational": 3, 00:10:03.239 "base_bdevs_list": [ 00:10:03.239 { 00:10:03.239 "name": "BaseBdev1", 00:10:03.239 "uuid": "343d5f39-cef6-4563-b456-d16245a4babf", 00:10:03.239 "is_configured": true, 00:10:03.239 "data_offset": 0, 00:10:03.239 "data_size": 65536 00:10:03.239 }, 00:10:03.239 { 00:10:03.239 "name": "BaseBdev2", 00:10:03.239 "uuid": "c1243cf9-18ec-4b5f-9005-ef506d1695bb", 00:10:03.239 "is_configured": true, 00:10:03.239 "data_offset": 0, 00:10:03.239 "data_size": 65536 00:10:03.240 }, 00:10:03.240 { 00:10:03.240 "name": "BaseBdev3", 00:10:03.240 "uuid": "e158d7d3-12c2-49f0-8dad-6d275f73f532", 00:10:03.240 "is_configured": true, 
00:10:03.240 "data_offset": 0, 00:10:03.240 "data_size": 65536 00:10:03.240 } 00:10:03.240 ] 00:10:03.240 } 00:10:03.240 } 00:10:03.240 }' 00:10:03.240 04:00:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:03.240 04:00:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:10:03.240 BaseBdev2 00:10:03.240 BaseBdev3' 00:10:03.240 04:00:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:03.240 04:00:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:03.240 04:00:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:03.240 04:00:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:03.240 04:00:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:10:03.240 04:00:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:03.240 04:00:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:03.240 04:00:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:03.240 04:00:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:03.240 04:00:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:03.240 04:00:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:03.240 04:00:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:10:03.240 04:00:56 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:10:03.240 04:00:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:03.240 04:00:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:03.240 04:00:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:03.240 04:00:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:03.240 04:00:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:03.240 04:00:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:03.240 04:00:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:10:03.240 04:00:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:03.240 04:00:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:03.240 04:00:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:03.240 04:00:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:03.240 04:00:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:03.240 04:00:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:03.240 04:00:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:10:03.240 04:00:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:03.240 04:00:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:03.240 [2024-12-06 04:00:56.561130] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:10:03.240 [2024-12-06 04:00:56.561205] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:03.240 [2024-12-06 04:00:56.561287] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:03.498 04:00:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:03.498 04:00:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:10:03.498 04:00:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0 00:10:03.498 04:00:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:10:03.498 04:00:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1 00:10:03.498 04:00:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:10:03.498 04:00:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 2 00:10:03.498 04:00:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:03.498 04:00:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:10:03.498 04:00:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:03.498 04:00:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:03.498 04:00:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:10:03.498 04:00:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:03.498 04:00:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:03.498 04:00:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
00:10:03.498 04:00:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:03.498 04:00:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:03.498 04:00:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:03.498 04:00:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:03.498 04:00:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:03.498 04:00:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:03.498 04:00:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:03.498 "name": "Existed_Raid", 00:10:03.498 "uuid": "ac7ebdae-0476-4442-ac53-98f0a8d075c6", 00:10:03.498 "strip_size_kb": 64, 00:10:03.498 "state": "offline", 00:10:03.498 "raid_level": "raid0", 00:10:03.498 "superblock": false, 00:10:03.498 "num_base_bdevs": 3, 00:10:03.498 "num_base_bdevs_discovered": 2, 00:10:03.498 "num_base_bdevs_operational": 2, 00:10:03.498 "base_bdevs_list": [ 00:10:03.498 { 00:10:03.498 "name": null, 00:10:03.498 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:03.498 "is_configured": false, 00:10:03.498 "data_offset": 0, 00:10:03.498 "data_size": 65536 00:10:03.498 }, 00:10:03.498 { 00:10:03.498 "name": "BaseBdev2", 00:10:03.498 "uuid": "c1243cf9-18ec-4b5f-9005-ef506d1695bb", 00:10:03.498 "is_configured": true, 00:10:03.498 "data_offset": 0, 00:10:03.498 "data_size": 65536 00:10:03.498 }, 00:10:03.498 { 00:10:03.498 "name": "BaseBdev3", 00:10:03.498 "uuid": "e158d7d3-12c2-49f0-8dad-6d275f73f532", 00:10:03.498 "is_configured": true, 00:10:03.498 "data_offset": 0, 00:10:03.498 "data_size": 65536 00:10:03.498 } 00:10:03.498 ] 00:10:03.498 }' 00:10:03.498 04:00:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:03.498 04:00:56 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:03.758 04:00:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:10:03.758 04:00:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:03.758 04:00:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:10:03.758 04:00:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:03.758 04:00:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:03.758 04:00:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:04.017 04:00:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:04.017 04:00:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:10:04.017 04:00:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:04.017 04:00:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:10:04.017 04:00:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:04.017 04:00:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:04.017 [2024-12-06 04:00:57.130305] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:10:04.017 04:00:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:04.017 04:00:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:04.017 04:00:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:04.017 04:00:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:04.017 04:00:57 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:10:04.017 04:00:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:04.017 04:00:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:04.017 04:00:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:04.017 04:00:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:10:04.017 04:00:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:04.017 04:00:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:10:04.017 04:00:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:04.017 04:00:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:04.017 [2024-12-06 04:00:57.280839] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:10:04.017 [2024-12-06 04:00:57.280889] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:10:04.276 04:00:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:04.276 04:00:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:04.276 04:00:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:04.276 04:00:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:04.276 04:00:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:10:04.276 04:00:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:04.276 04:00:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # 
set +x 00:10:04.276 04:00:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:04.276 04:00:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:10:04.276 04:00:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:10:04.276 04:00:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:10:04.276 04:00:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:10:04.276 04:00:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:04.276 04:00:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:10:04.276 04:00:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:04.276 04:00:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:04.276 BaseBdev2 00:10:04.276 04:00:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:04.276 04:00:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:10:04.276 04:00:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:10:04.276 04:00:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:04.276 04:00:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:10:04.276 04:00:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:04.276 04:00:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:04.276 04:00:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:04.276 04:00:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:10:04.276 04:00:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:04.276 04:00:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:04.276 04:00:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:10:04.276 04:00:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:04.276 04:00:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:04.276 [ 00:10:04.276 { 00:10:04.276 "name": "BaseBdev2", 00:10:04.276 "aliases": [ 00:10:04.276 "7db6cbff-4e16-47f4-a262-1ccc59754362" 00:10:04.276 ], 00:10:04.276 "product_name": "Malloc disk", 00:10:04.276 "block_size": 512, 00:10:04.276 "num_blocks": 65536, 00:10:04.276 "uuid": "7db6cbff-4e16-47f4-a262-1ccc59754362", 00:10:04.276 "assigned_rate_limits": { 00:10:04.276 "rw_ios_per_sec": 0, 00:10:04.276 "rw_mbytes_per_sec": 0, 00:10:04.276 "r_mbytes_per_sec": 0, 00:10:04.276 "w_mbytes_per_sec": 0 00:10:04.276 }, 00:10:04.276 "claimed": false, 00:10:04.276 "zoned": false, 00:10:04.276 "supported_io_types": { 00:10:04.276 "read": true, 00:10:04.276 "write": true, 00:10:04.276 "unmap": true, 00:10:04.276 "flush": true, 00:10:04.276 "reset": true, 00:10:04.276 "nvme_admin": false, 00:10:04.276 "nvme_io": false, 00:10:04.276 "nvme_io_md": false, 00:10:04.276 "write_zeroes": true, 00:10:04.276 "zcopy": true, 00:10:04.276 "get_zone_info": false, 00:10:04.276 "zone_management": false, 00:10:04.276 "zone_append": false, 00:10:04.276 "compare": false, 00:10:04.276 "compare_and_write": false, 00:10:04.276 "abort": true, 00:10:04.276 "seek_hole": false, 00:10:04.276 "seek_data": false, 00:10:04.276 "copy": true, 00:10:04.276 "nvme_iov_md": false 00:10:04.276 }, 00:10:04.276 "memory_domains": [ 00:10:04.276 { 00:10:04.276 "dma_device_id": "system", 00:10:04.276 "dma_device_type": 1 00:10:04.276 }, 
00:10:04.276 { 00:10:04.276 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:04.276 "dma_device_type": 2 00:10:04.276 } 00:10:04.276 ], 00:10:04.276 "driver_specific": {} 00:10:04.276 } 00:10:04.276 ] 00:10:04.276 04:00:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:04.276 04:00:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:10:04.276 04:00:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:10:04.276 04:00:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:04.276 04:00:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:10:04.276 04:00:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:04.276 04:00:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:04.276 BaseBdev3 00:10:04.276 04:00:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:04.276 04:00:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:10:04.276 04:00:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:10:04.276 04:00:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:04.276 04:00:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:10:04.276 04:00:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:04.276 04:00:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:04.277 04:00:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:04.277 04:00:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 
00:10:04.277 04:00:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:04.277 04:00:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:04.277 04:00:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:10:04.277 04:00:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:04.277 04:00:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:04.277 [ 00:10:04.277 { 00:10:04.277 "name": "BaseBdev3", 00:10:04.277 "aliases": [ 00:10:04.277 "6d85ff25-8d1d-4300-ae4c-1d7769095bde" 00:10:04.277 ], 00:10:04.277 "product_name": "Malloc disk", 00:10:04.277 "block_size": 512, 00:10:04.277 "num_blocks": 65536, 00:10:04.277 "uuid": "6d85ff25-8d1d-4300-ae4c-1d7769095bde", 00:10:04.277 "assigned_rate_limits": { 00:10:04.277 "rw_ios_per_sec": 0, 00:10:04.277 "rw_mbytes_per_sec": 0, 00:10:04.277 "r_mbytes_per_sec": 0, 00:10:04.277 "w_mbytes_per_sec": 0 00:10:04.277 }, 00:10:04.277 "claimed": false, 00:10:04.277 "zoned": false, 00:10:04.277 "supported_io_types": { 00:10:04.277 "read": true, 00:10:04.277 "write": true, 00:10:04.277 "unmap": true, 00:10:04.277 "flush": true, 00:10:04.277 "reset": true, 00:10:04.277 "nvme_admin": false, 00:10:04.277 "nvme_io": false, 00:10:04.277 "nvme_io_md": false, 00:10:04.277 "write_zeroes": true, 00:10:04.277 "zcopy": true, 00:10:04.277 "get_zone_info": false, 00:10:04.277 "zone_management": false, 00:10:04.277 "zone_append": false, 00:10:04.277 "compare": false, 00:10:04.277 "compare_and_write": false, 00:10:04.277 "abort": true, 00:10:04.277 "seek_hole": false, 00:10:04.277 "seek_data": false, 00:10:04.277 "copy": true, 00:10:04.277 "nvme_iov_md": false 00:10:04.277 }, 00:10:04.277 "memory_domains": [ 00:10:04.277 { 00:10:04.277 "dma_device_id": "system", 00:10:04.277 "dma_device_type": 1 00:10:04.277 }, 00:10:04.277 { 
00:10:04.277 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:04.277 "dma_device_type": 2 00:10:04.277 } 00:10:04.277 ], 00:10:04.277 "driver_specific": {} 00:10:04.277 } 00:10:04.277 ] 00:10:04.277 04:00:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:04.277 04:00:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:10:04.277 04:00:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:10:04.277 04:00:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:04.277 04:00:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:10:04.277 04:00:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:04.277 04:00:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:04.277 [2024-12-06 04:00:57.595791] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:04.277 [2024-12-06 04:00:57.595872] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:04.277 [2024-12-06 04:00:57.595912] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:04.277 [2024-12-06 04:00:57.597771] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:04.277 04:00:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:04.277 04:00:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:10:04.277 04:00:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:04.277 04:00:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local 
expected_state=configuring 00:10:04.277 04:00:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:04.277 04:00:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:04.277 04:00:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:04.277 04:00:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:04.277 04:00:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:04.277 04:00:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:04.277 04:00:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:04.277 04:00:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:04.277 04:00:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:04.277 04:00:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:04.277 04:00:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:04.277 04:00:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:04.536 04:00:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:04.536 "name": "Existed_Raid", 00:10:04.536 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:04.536 "strip_size_kb": 64, 00:10:04.536 "state": "configuring", 00:10:04.536 "raid_level": "raid0", 00:10:04.536 "superblock": false, 00:10:04.536 "num_base_bdevs": 3, 00:10:04.536 "num_base_bdevs_discovered": 2, 00:10:04.536 "num_base_bdevs_operational": 3, 00:10:04.536 "base_bdevs_list": [ 00:10:04.536 { 00:10:04.536 "name": "BaseBdev1", 00:10:04.536 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:04.536 
"is_configured": false, 00:10:04.536 "data_offset": 0, 00:10:04.536 "data_size": 0 00:10:04.536 }, 00:10:04.536 { 00:10:04.536 "name": "BaseBdev2", 00:10:04.536 "uuid": "7db6cbff-4e16-47f4-a262-1ccc59754362", 00:10:04.536 "is_configured": true, 00:10:04.536 "data_offset": 0, 00:10:04.536 "data_size": 65536 00:10:04.536 }, 00:10:04.536 { 00:10:04.536 "name": "BaseBdev3", 00:10:04.536 "uuid": "6d85ff25-8d1d-4300-ae4c-1d7769095bde", 00:10:04.536 "is_configured": true, 00:10:04.536 "data_offset": 0, 00:10:04.536 "data_size": 65536 00:10:04.536 } 00:10:04.536 ] 00:10:04.536 }' 00:10:04.536 04:00:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:04.536 04:00:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:04.795 04:00:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:10:04.795 04:00:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:04.795 04:00:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:04.795 [2024-12-06 04:00:58.027094] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:10:04.795 04:00:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:04.795 04:00:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:10:04.795 04:00:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:04.795 04:00:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:04.795 04:00:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:04.795 04:00:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:04.795 04:00:58 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:04.795 04:00:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:04.795 04:00:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:04.795 04:00:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:04.795 04:00:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:04.795 04:00:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:04.795 04:00:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:04.795 04:00:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:04.796 04:00:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:04.796 04:00:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:04.796 04:00:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:04.796 "name": "Existed_Raid", 00:10:04.796 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:04.796 "strip_size_kb": 64, 00:10:04.796 "state": "configuring", 00:10:04.796 "raid_level": "raid0", 00:10:04.796 "superblock": false, 00:10:04.796 "num_base_bdevs": 3, 00:10:04.796 "num_base_bdevs_discovered": 1, 00:10:04.796 "num_base_bdevs_operational": 3, 00:10:04.796 "base_bdevs_list": [ 00:10:04.796 { 00:10:04.796 "name": "BaseBdev1", 00:10:04.796 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:04.796 "is_configured": false, 00:10:04.796 "data_offset": 0, 00:10:04.796 "data_size": 0 00:10:04.796 }, 00:10:04.796 { 00:10:04.796 "name": null, 00:10:04.796 "uuid": "7db6cbff-4e16-47f4-a262-1ccc59754362", 00:10:04.796 "is_configured": false, 00:10:04.796 "data_offset": 0, 
00:10:04.796 "data_size": 65536 00:10:04.796 }, 00:10:04.796 { 00:10:04.796 "name": "BaseBdev3", 00:10:04.796 "uuid": "6d85ff25-8d1d-4300-ae4c-1d7769095bde", 00:10:04.796 "is_configured": true, 00:10:04.796 "data_offset": 0, 00:10:04.796 "data_size": 65536 00:10:04.796 } 00:10:04.796 ] 00:10:04.796 }' 00:10:04.796 04:00:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:04.796 04:00:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:05.364 04:00:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:10:05.364 04:00:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:05.364 04:00:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:05.364 04:00:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:05.364 04:00:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:05.364 04:00:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:10:05.364 04:00:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:10:05.364 04:00:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:05.364 04:00:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:05.364 [2024-12-06 04:00:58.543521] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:05.364 BaseBdev1 00:10:05.364 04:00:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:05.364 04:00:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:10:05.364 04:00:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local 
bdev_name=BaseBdev1 00:10:05.364 04:00:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:05.364 04:00:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:10:05.364 04:00:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:05.364 04:00:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:05.364 04:00:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:05.364 04:00:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:05.364 04:00:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:05.364 04:00:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:05.364 04:00:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:10:05.364 04:00:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:05.364 04:00:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:05.364 [ 00:10:05.364 { 00:10:05.364 "name": "BaseBdev1", 00:10:05.364 "aliases": [ 00:10:05.364 "c86cf833-178d-4d80-a3fc-bb8647eb2b3b" 00:10:05.364 ], 00:10:05.364 "product_name": "Malloc disk", 00:10:05.364 "block_size": 512, 00:10:05.364 "num_blocks": 65536, 00:10:05.364 "uuid": "c86cf833-178d-4d80-a3fc-bb8647eb2b3b", 00:10:05.364 "assigned_rate_limits": { 00:10:05.364 "rw_ios_per_sec": 0, 00:10:05.364 "rw_mbytes_per_sec": 0, 00:10:05.364 "r_mbytes_per_sec": 0, 00:10:05.364 "w_mbytes_per_sec": 0 00:10:05.364 }, 00:10:05.364 "claimed": true, 00:10:05.364 "claim_type": "exclusive_write", 00:10:05.364 "zoned": false, 00:10:05.364 "supported_io_types": { 00:10:05.364 "read": true, 00:10:05.364 "write": true, 00:10:05.365 "unmap": 
true, 00:10:05.365 "flush": true, 00:10:05.365 "reset": true, 00:10:05.365 "nvme_admin": false, 00:10:05.365 "nvme_io": false, 00:10:05.365 "nvme_io_md": false, 00:10:05.365 "write_zeroes": true, 00:10:05.365 "zcopy": true, 00:10:05.365 "get_zone_info": false, 00:10:05.365 "zone_management": false, 00:10:05.365 "zone_append": false, 00:10:05.365 "compare": false, 00:10:05.365 "compare_and_write": false, 00:10:05.365 "abort": true, 00:10:05.365 "seek_hole": false, 00:10:05.365 "seek_data": false, 00:10:05.365 "copy": true, 00:10:05.365 "nvme_iov_md": false 00:10:05.365 }, 00:10:05.365 "memory_domains": [ 00:10:05.365 { 00:10:05.365 "dma_device_id": "system", 00:10:05.365 "dma_device_type": 1 00:10:05.365 }, 00:10:05.365 { 00:10:05.365 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:05.365 "dma_device_type": 2 00:10:05.365 } 00:10:05.365 ], 00:10:05.365 "driver_specific": {} 00:10:05.365 } 00:10:05.365 ] 00:10:05.365 04:00:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:05.365 04:00:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:10:05.365 04:00:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:10:05.365 04:00:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:05.365 04:00:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:05.365 04:00:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:05.365 04:00:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:05.365 04:00:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:05.365 04:00:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:05.365 04:00:58 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:05.365 04:00:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:05.365 04:00:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:05.365 04:00:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:05.365 04:00:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:05.365 04:00:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:05.365 04:00:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:05.365 04:00:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:05.365 04:00:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:05.365 "name": "Existed_Raid", 00:10:05.365 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:05.365 "strip_size_kb": 64, 00:10:05.365 "state": "configuring", 00:10:05.365 "raid_level": "raid0", 00:10:05.365 "superblock": false, 00:10:05.365 "num_base_bdevs": 3, 00:10:05.365 "num_base_bdevs_discovered": 2, 00:10:05.365 "num_base_bdevs_operational": 3, 00:10:05.365 "base_bdevs_list": [ 00:10:05.365 { 00:10:05.365 "name": "BaseBdev1", 00:10:05.365 "uuid": "c86cf833-178d-4d80-a3fc-bb8647eb2b3b", 00:10:05.365 "is_configured": true, 00:10:05.365 "data_offset": 0, 00:10:05.365 "data_size": 65536 00:10:05.365 }, 00:10:05.365 { 00:10:05.365 "name": null, 00:10:05.365 "uuid": "7db6cbff-4e16-47f4-a262-1ccc59754362", 00:10:05.365 "is_configured": false, 00:10:05.365 "data_offset": 0, 00:10:05.365 "data_size": 65536 00:10:05.365 }, 00:10:05.365 { 00:10:05.365 "name": "BaseBdev3", 00:10:05.365 "uuid": "6d85ff25-8d1d-4300-ae4c-1d7769095bde", 00:10:05.365 "is_configured": true, 00:10:05.365 "data_offset": 0, 
00:10:05.365 "data_size": 65536 00:10:05.365 } 00:10:05.365 ] 00:10:05.365 }' 00:10:05.365 04:00:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:05.365 04:00:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:05.933 04:00:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:10:05.934 04:00:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:05.934 04:00:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:05.934 04:00:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:05.934 04:00:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:05.934 04:00:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:10:05.934 04:00:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:10:05.934 04:00:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:05.934 04:00:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:05.934 [2024-12-06 04:00:59.066675] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:10:05.934 04:00:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:05.934 04:00:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:10:05.934 04:00:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:05.934 04:00:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:05.934 04:00:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local 
raid_level=raid0 00:10:05.934 04:00:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:05.934 04:00:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:05.934 04:00:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:05.934 04:00:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:05.934 04:00:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:05.934 04:00:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:05.934 04:00:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:05.934 04:00:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:05.934 04:00:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:05.934 04:00:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:05.934 04:00:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:05.934 04:00:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:05.934 "name": "Existed_Raid", 00:10:05.934 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:05.934 "strip_size_kb": 64, 00:10:05.934 "state": "configuring", 00:10:05.934 "raid_level": "raid0", 00:10:05.934 "superblock": false, 00:10:05.934 "num_base_bdevs": 3, 00:10:05.934 "num_base_bdevs_discovered": 1, 00:10:05.934 "num_base_bdevs_operational": 3, 00:10:05.934 "base_bdevs_list": [ 00:10:05.934 { 00:10:05.934 "name": "BaseBdev1", 00:10:05.934 "uuid": "c86cf833-178d-4d80-a3fc-bb8647eb2b3b", 00:10:05.934 "is_configured": true, 00:10:05.934 "data_offset": 0, 00:10:05.934 "data_size": 65536 00:10:05.934 }, 00:10:05.934 { 
00:10:05.934 "name": null, 00:10:05.934 "uuid": "7db6cbff-4e16-47f4-a262-1ccc59754362", 00:10:05.934 "is_configured": false, 00:10:05.934 "data_offset": 0, 00:10:05.934 "data_size": 65536 00:10:05.934 }, 00:10:05.934 { 00:10:05.934 "name": null, 00:10:05.934 "uuid": "6d85ff25-8d1d-4300-ae4c-1d7769095bde", 00:10:05.934 "is_configured": false, 00:10:05.934 "data_offset": 0, 00:10:05.934 "data_size": 65536 00:10:05.934 } 00:10:05.934 ] 00:10:05.934 }' 00:10:05.934 04:00:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:05.934 04:00:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:06.502 04:00:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:06.502 04:00:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:10:06.502 04:00:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:06.502 04:00:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:06.502 04:00:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:06.502 04:00:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:10:06.502 04:00:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:10:06.502 04:00:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:06.502 04:00:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:06.502 [2024-12-06 04:00:59.605786] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:06.502 04:00:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:06.502 04:00:59 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:10:06.502 04:00:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:06.502 04:00:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:06.502 04:00:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:06.502 04:00:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:06.502 04:00:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:06.502 04:00:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:06.502 04:00:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:06.502 04:00:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:06.502 04:00:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:06.502 04:00:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:06.502 04:00:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:06.502 04:00:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:06.502 04:00:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:06.502 04:00:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:06.502 04:00:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:06.502 "name": "Existed_Raid", 00:10:06.502 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:06.502 "strip_size_kb": 64, 00:10:06.502 "state": "configuring", 00:10:06.502 "raid_level": "raid0", 00:10:06.502 
"superblock": false, 00:10:06.502 "num_base_bdevs": 3, 00:10:06.502 "num_base_bdevs_discovered": 2, 00:10:06.502 "num_base_bdevs_operational": 3, 00:10:06.502 "base_bdevs_list": [ 00:10:06.502 { 00:10:06.502 "name": "BaseBdev1", 00:10:06.502 "uuid": "c86cf833-178d-4d80-a3fc-bb8647eb2b3b", 00:10:06.502 "is_configured": true, 00:10:06.502 "data_offset": 0, 00:10:06.502 "data_size": 65536 00:10:06.502 }, 00:10:06.502 { 00:10:06.502 "name": null, 00:10:06.502 "uuid": "7db6cbff-4e16-47f4-a262-1ccc59754362", 00:10:06.502 "is_configured": false, 00:10:06.502 "data_offset": 0, 00:10:06.502 "data_size": 65536 00:10:06.502 }, 00:10:06.502 { 00:10:06.502 "name": "BaseBdev3", 00:10:06.502 "uuid": "6d85ff25-8d1d-4300-ae4c-1d7769095bde", 00:10:06.502 "is_configured": true, 00:10:06.502 "data_offset": 0, 00:10:06.502 "data_size": 65536 00:10:06.502 } 00:10:06.502 ] 00:10:06.502 }' 00:10:06.502 04:00:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:06.502 04:00:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:06.778 04:01:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:06.779 04:01:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:06.779 04:01:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:06.779 04:01:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:10:06.779 04:01:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:06.779 04:01:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:10:06.779 04:01:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:10:06.779 04:01:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 
00:10:06.779 04:01:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:06.779 [2024-12-06 04:01:00.100971] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:10:07.041 04:01:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:07.041 04:01:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:10:07.041 04:01:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:07.041 04:01:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:07.041 04:01:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:07.041 04:01:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:07.041 04:01:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:07.041 04:01:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:07.041 04:01:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:07.042 04:01:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:07.042 04:01:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:07.042 04:01:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:07.042 04:01:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:07.042 04:01:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:07.042 04:01:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:07.042 04:01:00 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:07.042 04:01:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:07.042 "name": "Existed_Raid", 00:10:07.042 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:07.042 "strip_size_kb": 64, 00:10:07.042 "state": "configuring", 00:10:07.042 "raid_level": "raid0", 00:10:07.042 "superblock": false, 00:10:07.042 "num_base_bdevs": 3, 00:10:07.042 "num_base_bdevs_discovered": 1, 00:10:07.042 "num_base_bdevs_operational": 3, 00:10:07.042 "base_bdevs_list": [ 00:10:07.042 { 00:10:07.042 "name": null, 00:10:07.042 "uuid": "c86cf833-178d-4d80-a3fc-bb8647eb2b3b", 00:10:07.042 "is_configured": false, 00:10:07.042 "data_offset": 0, 00:10:07.042 "data_size": 65536 00:10:07.042 }, 00:10:07.042 { 00:10:07.042 "name": null, 00:10:07.042 "uuid": "7db6cbff-4e16-47f4-a262-1ccc59754362", 00:10:07.042 "is_configured": false, 00:10:07.042 "data_offset": 0, 00:10:07.042 "data_size": 65536 00:10:07.042 }, 00:10:07.042 { 00:10:07.042 "name": "BaseBdev3", 00:10:07.042 "uuid": "6d85ff25-8d1d-4300-ae4c-1d7769095bde", 00:10:07.042 "is_configured": true, 00:10:07.042 "data_offset": 0, 00:10:07.042 "data_size": 65536 00:10:07.042 } 00:10:07.042 ] 00:10:07.042 }' 00:10:07.042 04:01:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:07.042 04:01:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:07.301 04:01:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:10:07.301 04:01:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:07.301 04:01:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:07.301 04:01:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:07.301 04:01:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- 
# [[ 0 == 0 ]] 00:10:07.301 04:01:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:10:07.301 04:01:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:10:07.301 04:01:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:07.301 04:01:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:07.301 [2024-12-06 04:01:00.636304] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:07.301 04:01:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:07.301 04:01:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:10:07.301 04:01:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:07.301 04:01:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:07.301 04:01:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:07.301 04:01:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:07.301 04:01:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:07.301 04:01:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:07.301 04:01:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:07.301 04:01:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:07.301 04:01:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:07.301 04:01:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 
00:10:07.301 04:01:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:07.301 04:01:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:07.301 04:01:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:07.560 04:01:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:07.560 04:01:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:07.560 "name": "Existed_Raid", 00:10:07.560 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:07.560 "strip_size_kb": 64, 00:10:07.560 "state": "configuring", 00:10:07.560 "raid_level": "raid0", 00:10:07.560 "superblock": false, 00:10:07.560 "num_base_bdevs": 3, 00:10:07.560 "num_base_bdevs_discovered": 2, 00:10:07.560 "num_base_bdevs_operational": 3, 00:10:07.560 "base_bdevs_list": [ 00:10:07.560 { 00:10:07.560 "name": null, 00:10:07.560 "uuid": "c86cf833-178d-4d80-a3fc-bb8647eb2b3b", 00:10:07.560 "is_configured": false, 00:10:07.560 "data_offset": 0, 00:10:07.560 "data_size": 65536 00:10:07.560 }, 00:10:07.560 { 00:10:07.560 "name": "BaseBdev2", 00:10:07.560 "uuid": "7db6cbff-4e16-47f4-a262-1ccc59754362", 00:10:07.560 "is_configured": true, 00:10:07.560 "data_offset": 0, 00:10:07.560 "data_size": 65536 00:10:07.560 }, 00:10:07.560 { 00:10:07.560 "name": "BaseBdev3", 00:10:07.560 "uuid": "6d85ff25-8d1d-4300-ae4c-1d7769095bde", 00:10:07.560 "is_configured": true, 00:10:07.560 "data_offset": 0, 00:10:07.560 "data_size": 65536 00:10:07.560 } 00:10:07.560 ] 00:10:07.560 }' 00:10:07.560 04:01:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:07.560 04:01:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:07.819 04:01:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:10:07.819 
04:01:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:07.819 04:01:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:07.819 04:01:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:07.819 04:01:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:07.819 04:01:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:10:08.078 04:01:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:08.078 04:01:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:10:08.078 04:01:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:08.078 04:01:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:08.078 04:01:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:08.078 04:01:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u c86cf833-178d-4d80-a3fc-bb8647eb2b3b 00:10:08.078 04:01:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:08.078 04:01:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:08.078 [2024-12-06 04:01:01.256540] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:10:08.078 [2024-12-06 04:01:01.256660] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:10:08.078 [2024-12-06 04:01:01.256689] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:10:08.078 [2024-12-06 04:01:01.256978] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 
00:10:08.078 [2024-12-06 04:01:01.257199] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:10:08.078 [2024-12-06 04:01:01.257243] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:10:08.078 [2024-12-06 04:01:01.257529] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:08.078 NewBaseBdev 00:10:08.078 04:01:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:08.078 04:01:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:10:08.078 04:01:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:10:08.078 04:01:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:08.078 04:01:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:10:08.078 04:01:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:08.078 04:01:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:08.078 04:01:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:08.078 04:01:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:08.078 04:01:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:08.078 04:01:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:08.078 04:01:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:10:08.078 04:01:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:08.079 04:01:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set 
+x 00:10:08.079 [ 00:10:08.079 { 00:10:08.079 "name": "NewBaseBdev", 00:10:08.079 "aliases": [ 00:10:08.079 "c86cf833-178d-4d80-a3fc-bb8647eb2b3b" 00:10:08.079 ], 00:10:08.079 "product_name": "Malloc disk", 00:10:08.079 "block_size": 512, 00:10:08.079 "num_blocks": 65536, 00:10:08.079 "uuid": "c86cf833-178d-4d80-a3fc-bb8647eb2b3b", 00:10:08.079 "assigned_rate_limits": { 00:10:08.079 "rw_ios_per_sec": 0, 00:10:08.079 "rw_mbytes_per_sec": 0, 00:10:08.079 "r_mbytes_per_sec": 0, 00:10:08.079 "w_mbytes_per_sec": 0 00:10:08.079 }, 00:10:08.079 "claimed": true, 00:10:08.079 "claim_type": "exclusive_write", 00:10:08.079 "zoned": false, 00:10:08.079 "supported_io_types": { 00:10:08.079 "read": true, 00:10:08.079 "write": true, 00:10:08.079 "unmap": true, 00:10:08.079 "flush": true, 00:10:08.079 "reset": true, 00:10:08.079 "nvme_admin": false, 00:10:08.079 "nvme_io": false, 00:10:08.079 "nvme_io_md": false, 00:10:08.079 "write_zeroes": true, 00:10:08.079 "zcopy": true, 00:10:08.079 "get_zone_info": false, 00:10:08.079 "zone_management": false, 00:10:08.079 "zone_append": false, 00:10:08.079 "compare": false, 00:10:08.079 "compare_and_write": false, 00:10:08.079 "abort": true, 00:10:08.079 "seek_hole": false, 00:10:08.079 "seek_data": false, 00:10:08.079 "copy": true, 00:10:08.079 "nvme_iov_md": false 00:10:08.079 }, 00:10:08.079 "memory_domains": [ 00:10:08.079 { 00:10:08.079 "dma_device_id": "system", 00:10:08.079 "dma_device_type": 1 00:10:08.079 }, 00:10:08.079 { 00:10:08.079 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:08.079 "dma_device_type": 2 00:10:08.079 } 00:10:08.079 ], 00:10:08.079 "driver_specific": {} 00:10:08.079 } 00:10:08.079 ] 00:10:08.079 04:01:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:08.079 04:01:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:10:08.079 04:01:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state 
Existed_Raid online raid0 64 3 00:10:08.079 04:01:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:08.079 04:01:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:08.079 04:01:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:08.079 04:01:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:08.079 04:01:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:08.079 04:01:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:08.079 04:01:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:08.079 04:01:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:08.079 04:01:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:08.079 04:01:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:08.079 04:01:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:08.079 04:01:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:08.079 04:01:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:08.079 04:01:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:08.079 04:01:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:08.079 "name": "Existed_Raid", 00:10:08.079 "uuid": "fe206e3f-cd87-45e0-ab6a-5ea65fe41354", 00:10:08.079 "strip_size_kb": 64, 00:10:08.079 "state": "online", 00:10:08.079 "raid_level": "raid0", 00:10:08.079 "superblock": false, 00:10:08.079 "num_base_bdevs": 3, 00:10:08.079 
"num_base_bdevs_discovered": 3, 00:10:08.079 "num_base_bdevs_operational": 3, 00:10:08.079 "base_bdevs_list": [ 00:10:08.079 { 00:10:08.079 "name": "NewBaseBdev", 00:10:08.079 "uuid": "c86cf833-178d-4d80-a3fc-bb8647eb2b3b", 00:10:08.079 "is_configured": true, 00:10:08.079 "data_offset": 0, 00:10:08.079 "data_size": 65536 00:10:08.079 }, 00:10:08.079 { 00:10:08.079 "name": "BaseBdev2", 00:10:08.079 "uuid": "7db6cbff-4e16-47f4-a262-1ccc59754362", 00:10:08.079 "is_configured": true, 00:10:08.079 "data_offset": 0, 00:10:08.079 "data_size": 65536 00:10:08.079 }, 00:10:08.079 { 00:10:08.079 "name": "BaseBdev3", 00:10:08.079 "uuid": "6d85ff25-8d1d-4300-ae4c-1d7769095bde", 00:10:08.079 "is_configured": true, 00:10:08.079 "data_offset": 0, 00:10:08.079 "data_size": 65536 00:10:08.079 } 00:10:08.079 ] 00:10:08.079 }' 00:10:08.079 04:01:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:08.079 04:01:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:08.646 04:01:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:10:08.646 04:01:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:10:08.646 04:01:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:08.646 04:01:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:08.646 04:01:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:10:08.646 04:01:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:08.646 04:01:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:10:08.646 04:01:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:08.646 04:01:01 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:10:08.646 04:01:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:08.646 [2024-12-06 04:01:01.760081] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:08.646 04:01:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:08.646 04:01:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:08.646 "name": "Existed_Raid", 00:10:08.646 "aliases": [ 00:10:08.646 "fe206e3f-cd87-45e0-ab6a-5ea65fe41354" 00:10:08.646 ], 00:10:08.646 "product_name": "Raid Volume", 00:10:08.646 "block_size": 512, 00:10:08.646 "num_blocks": 196608, 00:10:08.646 "uuid": "fe206e3f-cd87-45e0-ab6a-5ea65fe41354", 00:10:08.646 "assigned_rate_limits": { 00:10:08.646 "rw_ios_per_sec": 0, 00:10:08.646 "rw_mbytes_per_sec": 0, 00:10:08.646 "r_mbytes_per_sec": 0, 00:10:08.646 "w_mbytes_per_sec": 0 00:10:08.646 }, 00:10:08.646 "claimed": false, 00:10:08.646 "zoned": false, 00:10:08.646 "supported_io_types": { 00:10:08.646 "read": true, 00:10:08.646 "write": true, 00:10:08.646 "unmap": true, 00:10:08.646 "flush": true, 00:10:08.646 "reset": true, 00:10:08.646 "nvme_admin": false, 00:10:08.646 "nvme_io": false, 00:10:08.646 "nvme_io_md": false, 00:10:08.646 "write_zeroes": true, 00:10:08.646 "zcopy": false, 00:10:08.646 "get_zone_info": false, 00:10:08.646 "zone_management": false, 00:10:08.646 "zone_append": false, 00:10:08.646 "compare": false, 00:10:08.646 "compare_and_write": false, 00:10:08.646 "abort": false, 00:10:08.646 "seek_hole": false, 00:10:08.646 "seek_data": false, 00:10:08.646 "copy": false, 00:10:08.646 "nvme_iov_md": false 00:10:08.646 }, 00:10:08.646 "memory_domains": [ 00:10:08.646 { 00:10:08.646 "dma_device_id": "system", 00:10:08.646 "dma_device_type": 1 00:10:08.646 }, 00:10:08.646 { 00:10:08.646 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:08.646 "dma_device_type": 2 00:10:08.647 }, 
00:10:08.647 { 00:10:08.647 "dma_device_id": "system", 00:10:08.647 "dma_device_type": 1 00:10:08.647 }, 00:10:08.647 { 00:10:08.647 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:08.647 "dma_device_type": 2 00:10:08.647 }, 00:10:08.647 { 00:10:08.647 "dma_device_id": "system", 00:10:08.647 "dma_device_type": 1 00:10:08.647 }, 00:10:08.647 { 00:10:08.647 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:08.647 "dma_device_type": 2 00:10:08.647 } 00:10:08.647 ], 00:10:08.647 "driver_specific": { 00:10:08.647 "raid": { 00:10:08.647 "uuid": "fe206e3f-cd87-45e0-ab6a-5ea65fe41354", 00:10:08.647 "strip_size_kb": 64, 00:10:08.647 "state": "online", 00:10:08.647 "raid_level": "raid0", 00:10:08.647 "superblock": false, 00:10:08.647 "num_base_bdevs": 3, 00:10:08.647 "num_base_bdevs_discovered": 3, 00:10:08.647 "num_base_bdevs_operational": 3, 00:10:08.647 "base_bdevs_list": [ 00:10:08.647 { 00:10:08.647 "name": "NewBaseBdev", 00:10:08.647 "uuid": "c86cf833-178d-4d80-a3fc-bb8647eb2b3b", 00:10:08.647 "is_configured": true, 00:10:08.647 "data_offset": 0, 00:10:08.647 "data_size": 65536 00:10:08.647 }, 00:10:08.647 { 00:10:08.647 "name": "BaseBdev2", 00:10:08.647 "uuid": "7db6cbff-4e16-47f4-a262-1ccc59754362", 00:10:08.647 "is_configured": true, 00:10:08.647 "data_offset": 0, 00:10:08.647 "data_size": 65536 00:10:08.647 }, 00:10:08.647 { 00:10:08.647 "name": "BaseBdev3", 00:10:08.647 "uuid": "6d85ff25-8d1d-4300-ae4c-1d7769095bde", 00:10:08.647 "is_configured": true, 00:10:08.647 "data_offset": 0, 00:10:08.647 "data_size": 65536 00:10:08.647 } 00:10:08.647 ] 00:10:08.647 } 00:10:08.647 } 00:10:08.647 }' 00:10:08.647 04:01:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:08.647 04:01:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:10:08.647 BaseBdev2 00:10:08.647 BaseBdev3' 00:10:08.647 04:01:01 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:08.647 04:01:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:08.647 04:01:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:08.647 04:01:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:10:08.647 04:01:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:08.647 04:01:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:08.647 04:01:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:08.647 04:01:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:08.647 04:01:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:08.647 04:01:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:08.647 04:01:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:08.647 04:01:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:10:08.647 04:01:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:08.647 04:01:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:08.647 04:01:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:08.647 04:01:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:08.907 04:01:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # 
cmp_base_bdev='512 ' 00:10:08.907 04:01:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:08.907 04:01:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:08.907 04:01:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:10:08.907 04:01:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:08.907 04:01:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:08.907 04:01:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:08.907 04:01:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:08.907 04:01:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:08.907 04:01:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:08.907 04:01:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:08.907 04:01:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:08.907 04:01:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:08.907 [2024-12-06 04:01:02.059248] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:08.907 [2024-12-06 04:01:02.059323] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:08.907 [2024-12-06 04:01:02.059444] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:08.907 [2024-12-06 04:01:02.059525] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:08.907 [2024-12-06 04:01:02.059571] bdev_raid.c: 
380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:10:08.907 04:01:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:08.907 04:01:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 63910 00:10:08.907 04:01:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # '[' -z 63910 ']' 00:10:08.907 04:01:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # kill -0 63910 00:10:08.907 04:01:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # uname 00:10:08.907 04:01:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:08.907 04:01:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 63910 00:10:08.907 killing process with pid 63910 00:10:08.907 04:01:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:08.907 04:01:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:08.907 04:01:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 63910' 00:10:08.907 04:01:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@973 -- # kill 63910 00:10:08.907 [2024-12-06 04:01:02.108695] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:08.907 04:01:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@978 -- # wait 63910 00:10:09.166 [2024-12-06 04:01:02.415970] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:10.544 04:01:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:10:10.544 00:10:10.544 real 0m10.807s 00:10:10.544 user 0m17.245s 00:10:10.544 sys 0m1.875s 00:10:10.544 04:01:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1130 -- 
# xtrace_disable 00:10:10.544 ************************************ 00:10:10.544 END TEST raid_state_function_test 00:10:10.544 ************************************ 00:10:10.544 04:01:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:10.544 04:01:03 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test raid0 3 true 00:10:10.544 04:01:03 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:10:10.544 04:01:03 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:10.544 04:01:03 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:10.544 ************************************ 00:10:10.544 START TEST raid_state_function_test_sb 00:10:10.544 ************************************ 00:10:10.544 04:01:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test raid0 3 true 00:10:10.544 04:01:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid0 00:10:10.544 04:01:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:10:10.544 04:01:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:10:10.544 04:01:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:10:10.544 04:01:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:10:10.544 04:01:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:10.544 04:01:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:10:10.544 04:01:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:10.544 04:01:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:10.544 04:01:03 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:10:10.544 04:01:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:10.544 04:01:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:10.544 04:01:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:10:10.544 04:01:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:10.544 04:01:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:10.544 04:01:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:10:10.544 04:01:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:10:10.544 04:01:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:10:10.544 04:01:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:10:10.544 04:01:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:10:10.544 04:01:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:10:10.545 04:01:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']' 00:10:10.545 04:01:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:10:10.545 04:01:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:10:10.545 04:01:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:10:10.545 04:01:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:10:10.545 Process raid pid: 64531 00:10:10.545 04:01:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=64531 
00:10:10.545 04:01:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:10:10.545 04:01:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 64531' 00:10:10.545 04:01:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 64531 00:10:10.545 04:01:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 64531 ']' 00:10:10.545 04:01:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:10.545 04:01:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:10.545 04:01:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:10.545 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:10.545 04:01:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:10.545 04:01:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:10.545 [2024-12-06 04:01:03.731284] Starting SPDK v25.01-pre git sha1 a4d2a837b / DPDK 24.03.0 initialization... 
00:10:10.545 [2024-12-06 04:01:03.731509] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:10.804 [2024-12-06 04:01:03.909016] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:10.804 [2024-12-06 04:01:04.032033] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:11.063 [2024-12-06 04:01:04.235374] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:11.063 [2024-12-06 04:01:04.235423] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:11.323 04:01:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:11.323 04:01:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:10:11.323 04:01:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:10:11.323 04:01:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:11.323 04:01:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:11.323 [2024-12-06 04:01:04.647138] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:11.323 [2024-12-06 04:01:04.647192] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:11.323 [2024-12-06 04:01:04.647204] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:11.323 [2024-12-06 04:01:04.647213] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:11.323 [2024-12-06 04:01:04.647220] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with 
name: BaseBdev3 00:10:11.323 [2024-12-06 04:01:04.647229] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:11.323 04:01:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:11.323 04:01:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:10:11.323 04:01:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:11.323 04:01:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:11.323 04:01:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:11.323 04:01:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:11.323 04:01:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:11.323 04:01:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:11.323 04:01:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:11.323 04:01:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:11.323 04:01:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:11.323 04:01:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:11.323 04:01:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:11.323 04:01:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:11.323 04:01:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:11.323 04:01:04 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:11.583 04:01:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:11.583 "name": "Existed_Raid", 00:10:11.583 "uuid": "20e7581e-501b-439f-b27f-ad568bb4d270", 00:10:11.583 "strip_size_kb": 64, 00:10:11.583 "state": "configuring", 00:10:11.583 "raid_level": "raid0", 00:10:11.583 "superblock": true, 00:10:11.583 "num_base_bdevs": 3, 00:10:11.583 "num_base_bdevs_discovered": 0, 00:10:11.583 "num_base_bdevs_operational": 3, 00:10:11.583 "base_bdevs_list": [ 00:10:11.583 { 00:10:11.583 "name": "BaseBdev1", 00:10:11.583 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:11.583 "is_configured": false, 00:10:11.583 "data_offset": 0, 00:10:11.583 "data_size": 0 00:10:11.583 }, 00:10:11.583 { 00:10:11.583 "name": "BaseBdev2", 00:10:11.583 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:11.583 "is_configured": false, 00:10:11.583 "data_offset": 0, 00:10:11.583 "data_size": 0 00:10:11.583 }, 00:10:11.583 { 00:10:11.583 "name": "BaseBdev3", 00:10:11.583 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:11.583 "is_configured": false, 00:10:11.583 "data_offset": 0, 00:10:11.583 "data_size": 0 00:10:11.583 } 00:10:11.583 ] 00:10:11.583 }' 00:10:11.583 04:01:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:11.583 04:01:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:11.842 04:01:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:11.842 04:01:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:11.842 04:01:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:11.842 [2024-12-06 04:01:05.134241] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:11.842 [2024-12-06 04:01:05.134354] bdev_raid.c: 380:raid_bdev_cleanup: 
*DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:10:11.842 04:01:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:11.842 04:01:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:10:11.842 04:01:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:11.842 04:01:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:11.842 [2024-12-06 04:01:05.146239] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:11.842 [2024-12-06 04:01:05.146339] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:11.842 [2024-12-06 04:01:05.146368] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:11.842 [2024-12-06 04:01:05.146392] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:11.842 [2024-12-06 04:01:05.146411] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:10:11.842 [2024-12-06 04:01:05.146431] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:11.842 04:01:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:11.842 04:01:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:10:11.842 04:01:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:11.842 04:01:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:11.842 [2024-12-06 04:01:05.194803] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:12.100 BaseBdev1 
00:10:12.100 04:01:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:12.100 04:01:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:10:12.100 04:01:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:10:12.100 04:01:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:12.100 04:01:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:10:12.100 04:01:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:12.100 04:01:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:12.100 04:01:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:12.100 04:01:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:12.100 04:01:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:12.100 04:01:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:12.100 04:01:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:10:12.100 04:01:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:12.100 04:01:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:12.100 [ 00:10:12.100 { 00:10:12.100 "name": "BaseBdev1", 00:10:12.100 "aliases": [ 00:10:12.100 "8da4544d-cc59-4640-9fe4-69414ee6180e" 00:10:12.100 ], 00:10:12.100 "product_name": "Malloc disk", 00:10:12.100 "block_size": 512, 00:10:12.100 "num_blocks": 65536, 00:10:12.100 "uuid": "8da4544d-cc59-4640-9fe4-69414ee6180e", 00:10:12.100 "assigned_rate_limits": { 00:10:12.100 
"rw_ios_per_sec": 0, 00:10:12.100 "rw_mbytes_per_sec": 0, 00:10:12.100 "r_mbytes_per_sec": 0, 00:10:12.100 "w_mbytes_per_sec": 0 00:10:12.100 }, 00:10:12.100 "claimed": true, 00:10:12.100 "claim_type": "exclusive_write", 00:10:12.100 "zoned": false, 00:10:12.100 "supported_io_types": { 00:10:12.100 "read": true, 00:10:12.100 "write": true, 00:10:12.100 "unmap": true, 00:10:12.100 "flush": true, 00:10:12.100 "reset": true, 00:10:12.100 "nvme_admin": false, 00:10:12.100 "nvme_io": false, 00:10:12.100 "nvme_io_md": false, 00:10:12.100 "write_zeroes": true, 00:10:12.100 "zcopy": true, 00:10:12.100 "get_zone_info": false, 00:10:12.100 "zone_management": false, 00:10:12.100 "zone_append": false, 00:10:12.100 "compare": false, 00:10:12.100 "compare_and_write": false, 00:10:12.100 "abort": true, 00:10:12.100 "seek_hole": false, 00:10:12.100 "seek_data": false, 00:10:12.100 "copy": true, 00:10:12.100 "nvme_iov_md": false 00:10:12.100 }, 00:10:12.100 "memory_domains": [ 00:10:12.100 { 00:10:12.100 "dma_device_id": "system", 00:10:12.100 "dma_device_type": 1 00:10:12.100 }, 00:10:12.100 { 00:10:12.100 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:12.100 "dma_device_type": 2 00:10:12.100 } 00:10:12.100 ], 00:10:12.100 "driver_specific": {} 00:10:12.100 } 00:10:12.100 ] 00:10:12.100 04:01:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:12.100 04:01:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:10:12.100 04:01:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:10:12.100 04:01:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:12.100 04:01:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:12.100 04:01:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local 
raid_level=raid0 00:10:12.100 04:01:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:12.100 04:01:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:12.100 04:01:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:12.100 04:01:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:12.100 04:01:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:12.100 04:01:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:12.100 04:01:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:12.100 04:01:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:12.100 04:01:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:12.100 04:01:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:12.100 04:01:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:12.100 04:01:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:12.100 "name": "Existed_Raid", 00:10:12.100 "uuid": "0d7899f4-3685-41d5-a05d-93af54ba74e3", 00:10:12.100 "strip_size_kb": 64, 00:10:12.100 "state": "configuring", 00:10:12.100 "raid_level": "raid0", 00:10:12.100 "superblock": true, 00:10:12.100 "num_base_bdevs": 3, 00:10:12.100 "num_base_bdevs_discovered": 1, 00:10:12.100 "num_base_bdevs_operational": 3, 00:10:12.100 "base_bdevs_list": [ 00:10:12.100 { 00:10:12.100 "name": "BaseBdev1", 00:10:12.100 "uuid": "8da4544d-cc59-4640-9fe4-69414ee6180e", 00:10:12.100 "is_configured": true, 00:10:12.100 "data_offset": 2048, 00:10:12.100 "data_size": 63488 
00:10:12.100 }, 00:10:12.100 { 00:10:12.100 "name": "BaseBdev2", 00:10:12.100 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:12.100 "is_configured": false, 00:10:12.100 "data_offset": 0, 00:10:12.100 "data_size": 0 00:10:12.100 }, 00:10:12.100 { 00:10:12.100 "name": "BaseBdev3", 00:10:12.100 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:12.100 "is_configured": false, 00:10:12.100 "data_offset": 0, 00:10:12.100 "data_size": 0 00:10:12.100 } 00:10:12.100 ] 00:10:12.100 }' 00:10:12.100 04:01:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:12.100 04:01:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:12.359 04:01:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:12.359 04:01:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:12.359 04:01:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:12.619 [2024-12-06 04:01:05.714006] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:12.619 [2024-12-06 04:01:05.714080] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:10:12.619 04:01:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:12.619 04:01:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:10:12.619 04:01:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:12.619 04:01:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:12.619 [2024-12-06 04:01:05.726031] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:12.619 [2024-12-06 
04:01:05.727965] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:12.619 [2024-12-06 04:01:05.728013] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:12.619 [2024-12-06 04:01:05.728023] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:10:12.619 [2024-12-06 04:01:05.728032] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:12.619 04:01:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:12.619 04:01:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:10:12.619 04:01:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:12.619 04:01:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:10:12.619 04:01:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:12.619 04:01:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:12.619 04:01:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:12.619 04:01:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:12.619 04:01:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:12.619 04:01:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:12.619 04:01:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:12.619 04:01:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:12.619 04:01:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 
-- # local tmp 00:10:12.619 04:01:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:12.619 04:01:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:12.619 04:01:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:12.619 04:01:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:12.619 04:01:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:12.619 04:01:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:12.619 "name": "Existed_Raid", 00:10:12.619 "uuid": "80292db7-3a67-46b2-9985-c58818d4feb0", 00:10:12.619 "strip_size_kb": 64, 00:10:12.619 "state": "configuring", 00:10:12.619 "raid_level": "raid0", 00:10:12.619 "superblock": true, 00:10:12.619 "num_base_bdevs": 3, 00:10:12.619 "num_base_bdevs_discovered": 1, 00:10:12.620 "num_base_bdevs_operational": 3, 00:10:12.620 "base_bdevs_list": [ 00:10:12.620 { 00:10:12.620 "name": "BaseBdev1", 00:10:12.620 "uuid": "8da4544d-cc59-4640-9fe4-69414ee6180e", 00:10:12.620 "is_configured": true, 00:10:12.620 "data_offset": 2048, 00:10:12.620 "data_size": 63488 00:10:12.620 }, 00:10:12.620 { 00:10:12.620 "name": "BaseBdev2", 00:10:12.620 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:12.620 "is_configured": false, 00:10:12.620 "data_offset": 0, 00:10:12.620 "data_size": 0 00:10:12.620 }, 00:10:12.620 { 00:10:12.620 "name": "BaseBdev3", 00:10:12.620 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:12.620 "is_configured": false, 00:10:12.620 "data_offset": 0, 00:10:12.620 "data_size": 0 00:10:12.620 } 00:10:12.620 ] 00:10:12.620 }' 00:10:12.620 04:01:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:12.620 04:01:05 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:10:12.880 04:01:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:10:12.880 04:01:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:12.880 04:01:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:12.880 [2024-12-06 04:01:06.192566] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:12.880 BaseBdev2 00:10:12.880 04:01:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:12.880 04:01:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:10:12.880 04:01:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:10:12.880 04:01:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:12.880 04:01:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:10:12.880 04:01:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:12.880 04:01:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:12.880 04:01:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:12.880 04:01:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:12.880 04:01:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:12.880 04:01:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:12.880 04:01:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:10:12.880 04:01:06 bdev_raid.raid_state_function_test_sb 
-- common/autotest_common.sh@563 -- # xtrace_disable 00:10:12.880 04:01:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:12.880 [ 00:10:12.880 { 00:10:12.880 "name": "BaseBdev2", 00:10:12.880 "aliases": [ 00:10:12.880 "ba800bb3-7424-42c6-9d66-ce61c007e0cd" 00:10:12.880 ], 00:10:12.880 "product_name": "Malloc disk", 00:10:12.880 "block_size": 512, 00:10:12.880 "num_blocks": 65536, 00:10:12.880 "uuid": "ba800bb3-7424-42c6-9d66-ce61c007e0cd", 00:10:12.880 "assigned_rate_limits": { 00:10:12.880 "rw_ios_per_sec": 0, 00:10:12.880 "rw_mbytes_per_sec": 0, 00:10:12.880 "r_mbytes_per_sec": 0, 00:10:12.880 "w_mbytes_per_sec": 0 00:10:12.880 }, 00:10:12.880 "claimed": true, 00:10:12.880 "claim_type": "exclusive_write", 00:10:12.880 "zoned": false, 00:10:12.880 "supported_io_types": { 00:10:12.880 "read": true, 00:10:12.880 "write": true, 00:10:12.880 "unmap": true, 00:10:12.880 "flush": true, 00:10:12.880 "reset": true, 00:10:12.880 "nvme_admin": false, 00:10:12.880 "nvme_io": false, 00:10:12.880 "nvme_io_md": false, 00:10:12.880 "write_zeroes": true, 00:10:12.880 "zcopy": true, 00:10:12.880 "get_zone_info": false, 00:10:12.880 "zone_management": false, 00:10:12.880 "zone_append": false, 00:10:12.880 "compare": false, 00:10:12.880 "compare_and_write": false, 00:10:12.880 "abort": true, 00:10:12.880 "seek_hole": false, 00:10:12.880 "seek_data": false, 00:10:12.880 "copy": true, 00:10:12.880 "nvme_iov_md": false 00:10:12.880 }, 00:10:12.880 "memory_domains": [ 00:10:12.880 { 00:10:12.880 "dma_device_id": "system", 00:10:12.880 "dma_device_type": 1 00:10:12.880 }, 00:10:12.880 { 00:10:12.880 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:12.880 "dma_device_type": 2 00:10:12.880 } 00:10:12.880 ], 00:10:12.880 "driver_specific": {} 00:10:12.880 } 00:10:12.880 ] 00:10:12.880 04:01:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:12.880 04:01:06 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@911 -- # return 0 00:10:12.880 04:01:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:10:12.880 04:01:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:12.880 04:01:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:10:12.880 04:01:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:12.880 04:01:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:12.880 04:01:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:12.880 04:01:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:12.880 04:01:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:12.880 04:01:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:12.880 04:01:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:12.880 04:01:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:12.880 04:01:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:13.141 04:01:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:13.141 04:01:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:13.141 04:01:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:13.141 04:01:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:13.141 04:01:06 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:13.141 04:01:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:13.141 "name": "Existed_Raid", 00:10:13.141 "uuid": "80292db7-3a67-46b2-9985-c58818d4feb0", 00:10:13.141 "strip_size_kb": 64, 00:10:13.141 "state": "configuring", 00:10:13.141 "raid_level": "raid0", 00:10:13.141 "superblock": true, 00:10:13.141 "num_base_bdevs": 3, 00:10:13.141 "num_base_bdevs_discovered": 2, 00:10:13.141 "num_base_bdevs_operational": 3, 00:10:13.141 "base_bdevs_list": [ 00:10:13.141 { 00:10:13.141 "name": "BaseBdev1", 00:10:13.141 "uuid": "8da4544d-cc59-4640-9fe4-69414ee6180e", 00:10:13.141 "is_configured": true, 00:10:13.141 "data_offset": 2048, 00:10:13.141 "data_size": 63488 00:10:13.141 }, 00:10:13.141 { 00:10:13.141 "name": "BaseBdev2", 00:10:13.141 "uuid": "ba800bb3-7424-42c6-9d66-ce61c007e0cd", 00:10:13.141 "is_configured": true, 00:10:13.141 "data_offset": 2048, 00:10:13.141 "data_size": 63488 00:10:13.141 }, 00:10:13.141 { 00:10:13.141 "name": "BaseBdev3", 00:10:13.141 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:13.141 "is_configured": false, 00:10:13.141 "data_offset": 0, 00:10:13.141 "data_size": 0 00:10:13.141 } 00:10:13.141 ] 00:10:13.141 }' 00:10:13.141 04:01:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:13.141 04:01:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:13.400 04:01:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:10:13.400 04:01:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:13.400 04:01:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:13.659 [2024-12-06 04:01:06.773775] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:13.659 [2024-12-06 04:01:06.774142] 
bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:10:13.659 [2024-12-06 04:01:06.774207] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:10:13.659 [2024-12-06 04:01:06.774522] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:10:13.659 [2024-12-06 04:01:06.774740] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:10:13.659 [2024-12-06 04:01:06.774786] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:10:13.659 BaseBdev3 00:10:13.659 [2024-12-06 04:01:06.775027] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:13.659 04:01:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:13.659 04:01:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:10:13.659 04:01:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:10:13.659 04:01:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:13.659 04:01:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:10:13.659 04:01:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:13.659 04:01:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:13.659 04:01:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:13.659 04:01:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:13.659 04:01:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:13.659 04:01:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- #
[[ 0 == 0 ]] 00:10:13.659 04:01:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:10:13.659 04:01:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:13.659 04:01:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:13.659 [ 00:10:13.659 { 00:10:13.659 "name": "BaseBdev3", 00:10:13.659 "aliases": [ 00:10:13.659 "189000ab-0e4a-42b8-a2bf-5f7a95837c82" 00:10:13.659 ], 00:10:13.659 "product_name": "Malloc disk", 00:10:13.659 "block_size": 512, 00:10:13.659 "num_blocks": 65536, 00:10:13.659 "uuid": "189000ab-0e4a-42b8-a2bf-5f7a95837c82", 00:10:13.659 "assigned_rate_limits": { 00:10:13.659 "rw_ios_per_sec": 0, 00:10:13.659 "rw_mbytes_per_sec": 0, 00:10:13.659 "r_mbytes_per_sec": 0, 00:10:13.659 "w_mbytes_per_sec": 0 00:10:13.659 }, 00:10:13.659 "claimed": true, 00:10:13.659 "claim_type": "exclusive_write", 00:10:13.659 "zoned": false, 00:10:13.659 "supported_io_types": { 00:10:13.659 "read": true, 00:10:13.659 "write": true, 00:10:13.659 "unmap": true, 00:10:13.659 "flush": true, 00:10:13.659 "reset": true, 00:10:13.659 "nvme_admin": false, 00:10:13.659 "nvme_io": false, 00:10:13.659 "nvme_io_md": false, 00:10:13.659 "write_zeroes": true, 00:10:13.659 "zcopy": true, 00:10:13.659 "get_zone_info": false, 00:10:13.659 "zone_management": false, 00:10:13.659 "zone_append": false, 00:10:13.659 "compare": false, 00:10:13.659 "compare_and_write": false, 00:10:13.659 "abort": true, 00:10:13.659 "seek_hole": false, 00:10:13.659 "seek_data": false, 00:10:13.659 "copy": true, 00:10:13.659 "nvme_iov_md": false 00:10:13.659 }, 00:10:13.659 "memory_domains": [ 00:10:13.659 { 00:10:13.659 "dma_device_id": "system", 00:10:13.659 "dma_device_type": 1 00:10:13.659 }, 00:10:13.659 { 00:10:13.659 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:13.659 "dma_device_type": 2 00:10:13.659 } 00:10:13.659 ], 00:10:13.659 "driver_specific": 
{} 00:10:13.659 } 00:10:13.659 ] 00:10:13.659 04:01:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:13.659 04:01:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:10:13.659 04:01:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:10:13.659 04:01:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:13.659 04:01:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 64 3 00:10:13.659 04:01:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:13.659 04:01:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:13.659 04:01:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:13.659 04:01:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:13.659 04:01:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:13.660 04:01:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:13.660 04:01:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:13.660 04:01:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:13.660 04:01:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:13.660 04:01:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:13.660 04:01:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:13.660 04:01:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:10:13.660 04:01:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:13.660 04:01:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:13.660 04:01:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:13.660 "name": "Existed_Raid", 00:10:13.660 "uuid": "80292db7-3a67-46b2-9985-c58818d4feb0", 00:10:13.660 "strip_size_kb": 64, 00:10:13.660 "state": "online", 00:10:13.660 "raid_level": "raid0", 00:10:13.660 "superblock": true, 00:10:13.660 "num_base_bdevs": 3, 00:10:13.660 "num_base_bdevs_discovered": 3, 00:10:13.660 "num_base_bdevs_operational": 3, 00:10:13.660 "base_bdevs_list": [ 00:10:13.660 { 00:10:13.660 "name": "BaseBdev1", 00:10:13.660 "uuid": "8da4544d-cc59-4640-9fe4-69414ee6180e", 00:10:13.660 "is_configured": true, 00:10:13.660 "data_offset": 2048, 00:10:13.660 "data_size": 63488 00:10:13.660 }, 00:10:13.660 { 00:10:13.660 "name": "BaseBdev2", 00:10:13.660 "uuid": "ba800bb3-7424-42c6-9d66-ce61c007e0cd", 00:10:13.660 "is_configured": true, 00:10:13.660 "data_offset": 2048, 00:10:13.660 "data_size": 63488 00:10:13.660 }, 00:10:13.660 { 00:10:13.660 "name": "BaseBdev3", 00:10:13.660 "uuid": "189000ab-0e4a-42b8-a2bf-5f7a95837c82", 00:10:13.660 "is_configured": true, 00:10:13.660 "data_offset": 2048, 00:10:13.660 "data_size": 63488 00:10:13.660 } 00:10:13.660 ] 00:10:13.660 }' 00:10:13.660 04:01:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:13.660 04:01:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:13.919 04:01:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:10:13.919 04:01:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:10:13.919 04:01:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # 
local raid_bdev_info 00:10:13.919 04:01:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:13.919 04:01:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:10:13.919 04:01:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:13.919 04:01:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:10:13.919 04:01:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:13.919 04:01:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:13.919 04:01:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:14.177 [2024-12-06 04:01:07.273391] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:14.177 04:01:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:14.177 04:01:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:14.177 "name": "Existed_Raid", 00:10:14.177 "aliases": [ 00:10:14.177 "80292db7-3a67-46b2-9985-c58818d4feb0" 00:10:14.177 ], 00:10:14.177 "product_name": "Raid Volume", 00:10:14.177 "block_size": 512, 00:10:14.177 "num_blocks": 190464, 00:10:14.177 "uuid": "80292db7-3a67-46b2-9985-c58818d4feb0", 00:10:14.177 "assigned_rate_limits": { 00:10:14.177 "rw_ios_per_sec": 0, 00:10:14.177 "rw_mbytes_per_sec": 0, 00:10:14.177 "r_mbytes_per_sec": 0, 00:10:14.177 "w_mbytes_per_sec": 0 00:10:14.177 }, 00:10:14.177 "claimed": false, 00:10:14.177 "zoned": false, 00:10:14.177 "supported_io_types": { 00:10:14.177 "read": true, 00:10:14.177 "write": true, 00:10:14.177 "unmap": true, 00:10:14.177 "flush": true, 00:10:14.177 "reset": true, 00:10:14.177 "nvme_admin": false, 00:10:14.177 "nvme_io": false, 00:10:14.177 "nvme_io_md": false, 00:10:14.177 
"write_zeroes": true, 00:10:14.177 "zcopy": false, 00:10:14.177 "get_zone_info": false, 00:10:14.177 "zone_management": false, 00:10:14.177 "zone_append": false, 00:10:14.177 "compare": false, 00:10:14.177 "compare_and_write": false, 00:10:14.177 "abort": false, 00:10:14.177 "seek_hole": false, 00:10:14.177 "seek_data": false, 00:10:14.177 "copy": false, 00:10:14.177 "nvme_iov_md": false 00:10:14.177 }, 00:10:14.177 "memory_domains": [ 00:10:14.177 { 00:10:14.177 "dma_device_id": "system", 00:10:14.177 "dma_device_type": 1 00:10:14.177 }, 00:10:14.177 { 00:10:14.177 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:14.177 "dma_device_type": 2 00:10:14.177 }, 00:10:14.177 { 00:10:14.177 "dma_device_id": "system", 00:10:14.177 "dma_device_type": 1 00:10:14.177 }, 00:10:14.177 { 00:10:14.177 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:14.177 "dma_device_type": 2 00:10:14.177 }, 00:10:14.177 { 00:10:14.177 "dma_device_id": "system", 00:10:14.177 "dma_device_type": 1 00:10:14.177 }, 00:10:14.177 { 00:10:14.177 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:14.177 "dma_device_type": 2 00:10:14.177 } 00:10:14.177 ], 00:10:14.177 "driver_specific": { 00:10:14.177 "raid": { 00:10:14.178 "uuid": "80292db7-3a67-46b2-9985-c58818d4feb0", 00:10:14.178 "strip_size_kb": 64, 00:10:14.178 "state": "online", 00:10:14.178 "raid_level": "raid0", 00:10:14.178 "superblock": true, 00:10:14.178 "num_base_bdevs": 3, 00:10:14.178 "num_base_bdevs_discovered": 3, 00:10:14.178 "num_base_bdevs_operational": 3, 00:10:14.178 "base_bdevs_list": [ 00:10:14.178 { 00:10:14.178 "name": "BaseBdev1", 00:10:14.178 "uuid": "8da4544d-cc59-4640-9fe4-69414ee6180e", 00:10:14.178 "is_configured": true, 00:10:14.178 "data_offset": 2048, 00:10:14.178 "data_size": 63488 00:10:14.178 }, 00:10:14.178 { 00:10:14.178 "name": "BaseBdev2", 00:10:14.178 "uuid": "ba800bb3-7424-42c6-9d66-ce61c007e0cd", 00:10:14.178 "is_configured": true, 00:10:14.178 "data_offset": 2048, 00:10:14.178 "data_size": 63488 00:10:14.178 }, 
00:10:14.178 { 00:10:14.178 "name": "BaseBdev3", 00:10:14.178 "uuid": "189000ab-0e4a-42b8-a2bf-5f7a95837c82", 00:10:14.178 "is_configured": true, 00:10:14.178 "data_offset": 2048, 00:10:14.178 "data_size": 63488 00:10:14.178 } 00:10:14.178 ] 00:10:14.178 } 00:10:14.178 } 00:10:14.178 }' 00:10:14.178 04:01:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:14.178 04:01:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:10:14.178 BaseBdev2 00:10:14.178 BaseBdev3' 00:10:14.178 04:01:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:14.178 04:01:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:14.178 04:01:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:14.178 04:01:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:10:14.178 04:01:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:14.178 04:01:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:14.178 04:01:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:14.178 04:01:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:14.178 04:01:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:14.178 04:01:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:14.178 04:01:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:14.178 
04:01:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:10:14.178 04:01:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:14.178 04:01:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:14.178 04:01:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:14.178 04:01:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:14.178 04:01:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:14.178 04:01:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:14.178 04:01:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:14.178 04:01:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:10:14.178 04:01:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:14.178 04:01:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:14.178 04:01:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:14.178 04:01:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:14.438 04:01:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:14.438 04:01:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:14.438 04:01:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:10:14.438 04:01:07 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:10:14.438 04:01:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:14.438 [2024-12-06 04:01:07.556584] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:10:14.438 [2024-12-06 04:01:07.556614] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:14.438 [2024-12-06 04:01:07.556670] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:14.438 04:01:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:14.438 04:01:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:10:14.438 04:01:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0 00:10:14.438 04:01:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:10:14.438 04:01:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1 00:10:14.438 04:01:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:10:14.438 04:01:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 2 00:10:14.438 04:01:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:14.438 04:01:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:10:14.438 04:01:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:14.438 04:01:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:14.438 04:01:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:10:14.438 04:01:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:10:14.438 04:01:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:14.438 04:01:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:14.438 04:01:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:14.438 04:01:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:14.438 04:01:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:14.438 04:01:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:14.438 04:01:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:14.438 04:01:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:14.438 04:01:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:14.438 "name": "Existed_Raid", 00:10:14.438 "uuid": "80292db7-3a67-46b2-9985-c58818d4feb0", 00:10:14.438 "strip_size_kb": 64, 00:10:14.438 "state": "offline", 00:10:14.438 "raid_level": "raid0", 00:10:14.438 "superblock": true, 00:10:14.438 "num_base_bdevs": 3, 00:10:14.438 "num_base_bdevs_discovered": 2, 00:10:14.438 "num_base_bdevs_operational": 2, 00:10:14.438 "base_bdevs_list": [ 00:10:14.438 { 00:10:14.438 "name": null, 00:10:14.438 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:14.438 "is_configured": false, 00:10:14.438 "data_offset": 0, 00:10:14.438 "data_size": 63488 00:10:14.438 }, 00:10:14.438 { 00:10:14.438 "name": "BaseBdev2", 00:10:14.438 "uuid": "ba800bb3-7424-42c6-9d66-ce61c007e0cd", 00:10:14.438 "is_configured": true, 00:10:14.438 "data_offset": 2048, 00:10:14.438 "data_size": 63488 00:10:14.438 }, 00:10:14.438 { 00:10:14.438 "name": "BaseBdev3", 00:10:14.438 "uuid": "189000ab-0e4a-42b8-a2bf-5f7a95837c82", 
00:10:14.438 "is_configured": true, 00:10:14.438 "data_offset": 2048, 00:10:14.438 "data_size": 63488 00:10:14.438 } 00:10:14.438 ] 00:10:14.438 }' 00:10:14.438 04:01:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:14.438 04:01:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:15.009 04:01:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:10:15.009 04:01:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:15.009 04:01:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:15.009 04:01:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:10:15.009 04:01:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:15.009 04:01:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:15.009 04:01:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:15.009 04:01:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:10:15.009 04:01:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:15.009 04:01:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:10:15.009 04:01:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:15.009 04:01:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:15.009 [2024-12-06 04:01:08.141926] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:10:15.009 04:01:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:15.009 04:01:08 bdev_raid.raid_state_function_test_sb 
-- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:15.009 04:01:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:15.009 04:01:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:15.009 04:01:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:10:15.009 04:01:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:15.009 04:01:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:15.009 04:01:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:15.009 04:01:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:10:15.009 04:01:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:15.009 04:01:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:10:15.009 04:01:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:15.009 04:01:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:15.009 [2024-12-06 04:01:08.301549] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:10:15.009 [2024-12-06 04:01:08.301648] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:10:15.270 04:01:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:15.270 04:01:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:15.270 04:01:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:15.270 04:01:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:10:15.270 04:01:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:15.270 04:01:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:10:15.270 04:01:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:15.270 04:01:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:15.270 04:01:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:10:15.270 04:01:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:10:15.270 04:01:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:10:15.270 04:01:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:10:15.270 04:01:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:15.270 04:01:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:10:15.270 04:01:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:15.270 04:01:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:15.270 BaseBdev2 00:10:15.270 04:01:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:15.270 04:01:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:10:15.270 04:01:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:10:15.270 04:01:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:15.270 04:01:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:10:15.270 04:01:08 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:15.270 04:01:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:15.270 04:01:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:15.270 04:01:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:15.270 04:01:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:15.270 04:01:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:15.270 04:01:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:10:15.270 04:01:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:15.270 04:01:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:15.270 [ 00:10:15.270 { 00:10:15.270 "name": "BaseBdev2", 00:10:15.270 "aliases": [ 00:10:15.270 "f2a675de-198d-4697-ab33-77734661c0f3" 00:10:15.270 ], 00:10:15.270 "product_name": "Malloc disk", 00:10:15.270 "block_size": 512, 00:10:15.270 "num_blocks": 65536, 00:10:15.270 "uuid": "f2a675de-198d-4697-ab33-77734661c0f3", 00:10:15.270 "assigned_rate_limits": { 00:10:15.270 "rw_ios_per_sec": 0, 00:10:15.270 "rw_mbytes_per_sec": 0, 00:10:15.270 "r_mbytes_per_sec": 0, 00:10:15.270 "w_mbytes_per_sec": 0 00:10:15.270 }, 00:10:15.270 "claimed": false, 00:10:15.270 "zoned": false, 00:10:15.270 "supported_io_types": { 00:10:15.270 "read": true, 00:10:15.270 "write": true, 00:10:15.270 "unmap": true, 00:10:15.270 "flush": true, 00:10:15.270 "reset": true, 00:10:15.270 "nvme_admin": false, 00:10:15.270 "nvme_io": false, 00:10:15.270 "nvme_io_md": false, 00:10:15.270 "write_zeroes": true, 00:10:15.270 "zcopy": true, 00:10:15.270 "get_zone_info": false, 00:10:15.270 
"zone_management": false, 00:10:15.270 "zone_append": false, 00:10:15.270 "compare": false, 00:10:15.270 "compare_and_write": false, 00:10:15.270 "abort": true, 00:10:15.270 "seek_hole": false, 00:10:15.270 "seek_data": false, 00:10:15.270 "copy": true, 00:10:15.270 "nvme_iov_md": false 00:10:15.270 }, 00:10:15.270 "memory_domains": [ 00:10:15.270 { 00:10:15.270 "dma_device_id": "system", 00:10:15.270 "dma_device_type": 1 00:10:15.270 }, 00:10:15.270 { 00:10:15.270 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:15.270 "dma_device_type": 2 00:10:15.270 } 00:10:15.270 ], 00:10:15.270 "driver_specific": {} 00:10:15.270 } 00:10:15.270 ] 00:10:15.270 04:01:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:15.270 04:01:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:10:15.270 04:01:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:10:15.270 04:01:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:15.270 04:01:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:10:15.270 04:01:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:15.270 04:01:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:15.270 BaseBdev3 00:10:15.270 04:01:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:15.270 04:01:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:10:15.270 04:01:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:10:15.270 04:01:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:15.270 04:01:08 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@905 -- # local i 00:10:15.270 04:01:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:15.270 04:01:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:15.270 04:01:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:15.270 04:01:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:15.270 04:01:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:15.270 04:01:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:15.270 04:01:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:10:15.270 04:01:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:15.270 04:01:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:15.270 [ 00:10:15.270 { 00:10:15.270 "name": "BaseBdev3", 00:10:15.270 "aliases": [ 00:10:15.270 "1811118e-5461-4004-bca2-b705a9500cbb" 00:10:15.270 ], 00:10:15.270 "product_name": "Malloc disk", 00:10:15.270 "block_size": 512, 00:10:15.270 "num_blocks": 65536, 00:10:15.270 "uuid": "1811118e-5461-4004-bca2-b705a9500cbb", 00:10:15.270 "assigned_rate_limits": { 00:10:15.270 "rw_ios_per_sec": 0, 00:10:15.270 "rw_mbytes_per_sec": 0, 00:10:15.270 "r_mbytes_per_sec": 0, 00:10:15.270 "w_mbytes_per_sec": 0 00:10:15.270 }, 00:10:15.270 "claimed": false, 00:10:15.270 "zoned": false, 00:10:15.270 "supported_io_types": { 00:10:15.270 "read": true, 00:10:15.270 "write": true, 00:10:15.270 "unmap": true, 00:10:15.270 "flush": true, 00:10:15.270 "reset": true, 00:10:15.270 "nvme_admin": false, 00:10:15.270 "nvme_io": false, 00:10:15.270 "nvme_io_md": false, 00:10:15.270 "write_zeroes": true, 00:10:15.270 
"zcopy": true, 00:10:15.270 "get_zone_info": false, 00:10:15.270 "zone_management": false, 00:10:15.270 "zone_append": false, 00:10:15.270 "compare": false, 00:10:15.270 "compare_and_write": false, 00:10:15.270 "abort": true, 00:10:15.270 "seek_hole": false, 00:10:15.270 "seek_data": false, 00:10:15.270 "copy": true, 00:10:15.270 "nvme_iov_md": false 00:10:15.270 }, 00:10:15.270 "memory_domains": [ 00:10:15.270 { 00:10:15.270 "dma_device_id": "system", 00:10:15.270 "dma_device_type": 1 00:10:15.270 }, 00:10:15.270 { 00:10:15.270 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:15.270 "dma_device_type": 2 00:10:15.270 } 00:10:15.270 ], 00:10:15.270 "driver_specific": {} 00:10:15.270 } 00:10:15.270 ] 00:10:15.271 04:01:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:15.271 04:01:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:10:15.271 04:01:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:10:15.271 04:01:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:15.271 04:01:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:10:15.271 04:01:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:15.271 04:01:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:15.271 [2024-12-06 04:01:08.611816] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:15.271 [2024-12-06 04:01:08.611863] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:15.271 [2024-12-06 04:01:08.611883] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:15.271 [2024-12-06 04:01:08.613743] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:15.271 04:01:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:15.271 04:01:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:10:15.271 04:01:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:15.271 04:01:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:15.271 04:01:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:15.271 04:01:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:15.271 04:01:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:15.271 04:01:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:15.271 04:01:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:15.271 04:01:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:15.271 04:01:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:15.531 04:01:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:15.531 04:01:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:15.531 04:01:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:15.531 04:01:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:15.531 04:01:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:15.531 04:01:08 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:15.531 "name": "Existed_Raid", 00:10:15.531 "uuid": "82ac14be-afda-4bd5-9951-39fcbcd28562", 00:10:15.531 "strip_size_kb": 64, 00:10:15.531 "state": "configuring", 00:10:15.531 "raid_level": "raid0", 00:10:15.531 "superblock": true, 00:10:15.531 "num_base_bdevs": 3, 00:10:15.531 "num_base_bdevs_discovered": 2, 00:10:15.531 "num_base_bdevs_operational": 3, 00:10:15.531 "base_bdevs_list": [ 00:10:15.531 { 00:10:15.531 "name": "BaseBdev1", 00:10:15.531 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:15.531 "is_configured": false, 00:10:15.531 "data_offset": 0, 00:10:15.531 "data_size": 0 00:10:15.531 }, 00:10:15.531 { 00:10:15.531 "name": "BaseBdev2", 00:10:15.531 "uuid": "f2a675de-198d-4697-ab33-77734661c0f3", 00:10:15.531 "is_configured": true, 00:10:15.531 "data_offset": 2048, 00:10:15.531 "data_size": 63488 00:10:15.531 }, 00:10:15.531 { 00:10:15.531 "name": "BaseBdev3", 00:10:15.531 "uuid": "1811118e-5461-4004-bca2-b705a9500cbb", 00:10:15.531 "is_configured": true, 00:10:15.531 "data_offset": 2048, 00:10:15.531 "data_size": 63488 00:10:15.531 } 00:10:15.531 ] 00:10:15.531 }' 00:10:15.531 04:01:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:15.531 04:01:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:15.791 04:01:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:10:15.791 04:01:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:15.791 04:01:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:15.791 [2024-12-06 04:01:09.051173] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:10:15.791 04:01:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:15.791 04:01:09 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:10:15.791 04:01:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:15.791 04:01:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:15.791 04:01:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:15.791 04:01:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:15.791 04:01:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:15.791 04:01:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:15.791 04:01:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:15.791 04:01:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:15.791 04:01:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:15.791 04:01:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:15.791 04:01:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:15.791 04:01:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:15.791 04:01:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:15.791 04:01:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:15.791 04:01:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:15.791 "name": "Existed_Raid", 00:10:15.791 "uuid": "82ac14be-afda-4bd5-9951-39fcbcd28562", 00:10:15.791 "strip_size_kb": 64, 
00:10:15.791 "state": "configuring", 00:10:15.791 "raid_level": "raid0", 00:10:15.791 "superblock": true, 00:10:15.791 "num_base_bdevs": 3, 00:10:15.791 "num_base_bdevs_discovered": 1, 00:10:15.791 "num_base_bdevs_operational": 3, 00:10:15.791 "base_bdevs_list": [ 00:10:15.791 { 00:10:15.791 "name": "BaseBdev1", 00:10:15.791 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:15.791 "is_configured": false, 00:10:15.791 "data_offset": 0, 00:10:15.791 "data_size": 0 00:10:15.791 }, 00:10:15.791 { 00:10:15.791 "name": null, 00:10:15.791 "uuid": "f2a675de-198d-4697-ab33-77734661c0f3", 00:10:15.791 "is_configured": false, 00:10:15.791 "data_offset": 0, 00:10:15.791 "data_size": 63488 00:10:15.791 }, 00:10:15.791 { 00:10:15.791 "name": "BaseBdev3", 00:10:15.791 "uuid": "1811118e-5461-4004-bca2-b705a9500cbb", 00:10:15.791 "is_configured": true, 00:10:15.791 "data_offset": 2048, 00:10:15.791 "data_size": 63488 00:10:15.791 } 00:10:15.791 ] 00:10:15.791 }' 00:10:15.792 04:01:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:15.792 04:01:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:16.360 04:01:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:16.360 04:01:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:16.360 04:01:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:16.360 04:01:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:10:16.361 04:01:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:16.361 04:01:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:10:16.361 04:01:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b 
BaseBdev1 00:10:16.361 04:01:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:16.361 04:01:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:16.361 [2024-12-06 04:01:09.538650] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:16.361 BaseBdev1 00:10:16.361 04:01:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:16.361 04:01:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:10:16.361 04:01:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:10:16.361 04:01:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:16.361 04:01:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:10:16.361 04:01:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:16.361 04:01:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:16.361 04:01:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:16.361 04:01:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:16.361 04:01:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:16.361 04:01:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:16.361 04:01:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:10:16.361 04:01:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:16.361 04:01:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:16.361 
[ 00:10:16.361 { 00:10:16.361 "name": "BaseBdev1", 00:10:16.361 "aliases": [ 00:10:16.361 "817d4d0a-d41e-4f8e-870b-df5ee1bab7e4" 00:10:16.361 ], 00:10:16.361 "product_name": "Malloc disk", 00:10:16.361 "block_size": 512, 00:10:16.361 "num_blocks": 65536, 00:10:16.361 "uuid": "817d4d0a-d41e-4f8e-870b-df5ee1bab7e4", 00:10:16.361 "assigned_rate_limits": { 00:10:16.361 "rw_ios_per_sec": 0, 00:10:16.361 "rw_mbytes_per_sec": 0, 00:10:16.361 "r_mbytes_per_sec": 0, 00:10:16.361 "w_mbytes_per_sec": 0 00:10:16.361 }, 00:10:16.361 "claimed": true, 00:10:16.361 "claim_type": "exclusive_write", 00:10:16.361 "zoned": false, 00:10:16.361 "supported_io_types": { 00:10:16.361 "read": true, 00:10:16.361 "write": true, 00:10:16.361 "unmap": true, 00:10:16.361 "flush": true, 00:10:16.361 "reset": true, 00:10:16.361 "nvme_admin": false, 00:10:16.361 "nvme_io": false, 00:10:16.361 "nvme_io_md": false, 00:10:16.361 "write_zeroes": true, 00:10:16.361 "zcopy": true, 00:10:16.361 "get_zone_info": false, 00:10:16.361 "zone_management": false, 00:10:16.361 "zone_append": false, 00:10:16.361 "compare": false, 00:10:16.361 "compare_and_write": false, 00:10:16.361 "abort": true, 00:10:16.361 "seek_hole": false, 00:10:16.361 "seek_data": false, 00:10:16.361 "copy": true, 00:10:16.361 "nvme_iov_md": false 00:10:16.361 }, 00:10:16.361 "memory_domains": [ 00:10:16.361 { 00:10:16.361 "dma_device_id": "system", 00:10:16.361 "dma_device_type": 1 00:10:16.361 }, 00:10:16.361 { 00:10:16.361 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:16.361 "dma_device_type": 2 00:10:16.361 } 00:10:16.361 ], 00:10:16.361 "driver_specific": {} 00:10:16.361 } 00:10:16.361 ] 00:10:16.361 04:01:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:16.361 04:01:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:10:16.361 04:01:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid 
configuring raid0 64 3 00:10:16.361 04:01:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:16.361 04:01:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:16.361 04:01:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:16.361 04:01:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:16.361 04:01:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:16.361 04:01:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:16.361 04:01:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:16.361 04:01:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:16.361 04:01:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:16.361 04:01:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:16.361 04:01:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:16.361 04:01:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:16.361 04:01:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:16.361 04:01:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:16.361 04:01:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:16.361 "name": "Existed_Raid", 00:10:16.361 "uuid": "82ac14be-afda-4bd5-9951-39fcbcd28562", 00:10:16.361 "strip_size_kb": 64, 00:10:16.361 "state": "configuring", 00:10:16.361 "raid_level": "raid0", 00:10:16.361 "superblock": true, 
00:10:16.361 "num_base_bdevs": 3, 00:10:16.361 "num_base_bdevs_discovered": 2, 00:10:16.361 "num_base_bdevs_operational": 3, 00:10:16.361 "base_bdevs_list": [ 00:10:16.361 { 00:10:16.361 "name": "BaseBdev1", 00:10:16.361 "uuid": "817d4d0a-d41e-4f8e-870b-df5ee1bab7e4", 00:10:16.361 "is_configured": true, 00:10:16.361 "data_offset": 2048, 00:10:16.361 "data_size": 63488 00:10:16.361 }, 00:10:16.361 { 00:10:16.361 "name": null, 00:10:16.361 "uuid": "f2a675de-198d-4697-ab33-77734661c0f3", 00:10:16.361 "is_configured": false, 00:10:16.361 "data_offset": 0, 00:10:16.361 "data_size": 63488 00:10:16.361 }, 00:10:16.361 { 00:10:16.361 "name": "BaseBdev3", 00:10:16.361 "uuid": "1811118e-5461-4004-bca2-b705a9500cbb", 00:10:16.361 "is_configured": true, 00:10:16.361 "data_offset": 2048, 00:10:16.361 "data_size": 63488 00:10:16.361 } 00:10:16.361 ] 00:10:16.361 }' 00:10:16.361 04:01:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:16.361 04:01:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:16.930 04:01:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:16.930 04:01:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:10:16.930 04:01:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:16.930 04:01:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:16.930 04:01:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:16.930 04:01:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:10:16.930 04:01:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:10:16.930 04:01:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- 
# xtrace_disable 00:10:16.930 04:01:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:16.930 [2024-12-06 04:01:10.069797] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:10:16.930 04:01:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:16.930 04:01:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:10:16.931 04:01:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:16.931 04:01:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:16.931 04:01:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:16.931 04:01:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:16.931 04:01:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:16.931 04:01:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:16.931 04:01:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:16.931 04:01:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:16.931 04:01:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:16.931 04:01:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:16.931 04:01:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:16.931 04:01:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:16.931 04:01:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set 
+x 00:10:16.931 04:01:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:16.931 04:01:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:16.931 "name": "Existed_Raid", 00:10:16.931 "uuid": "82ac14be-afda-4bd5-9951-39fcbcd28562", 00:10:16.931 "strip_size_kb": 64, 00:10:16.931 "state": "configuring", 00:10:16.931 "raid_level": "raid0", 00:10:16.931 "superblock": true, 00:10:16.931 "num_base_bdevs": 3, 00:10:16.931 "num_base_bdevs_discovered": 1, 00:10:16.931 "num_base_bdevs_operational": 3, 00:10:16.931 "base_bdevs_list": [ 00:10:16.931 { 00:10:16.931 "name": "BaseBdev1", 00:10:16.931 "uuid": "817d4d0a-d41e-4f8e-870b-df5ee1bab7e4", 00:10:16.931 "is_configured": true, 00:10:16.931 "data_offset": 2048, 00:10:16.931 "data_size": 63488 00:10:16.931 }, 00:10:16.931 { 00:10:16.931 "name": null, 00:10:16.931 "uuid": "f2a675de-198d-4697-ab33-77734661c0f3", 00:10:16.931 "is_configured": false, 00:10:16.931 "data_offset": 0, 00:10:16.931 "data_size": 63488 00:10:16.931 }, 00:10:16.931 { 00:10:16.931 "name": null, 00:10:16.931 "uuid": "1811118e-5461-4004-bca2-b705a9500cbb", 00:10:16.931 "is_configured": false, 00:10:16.931 "data_offset": 0, 00:10:16.931 "data_size": 63488 00:10:16.931 } 00:10:16.931 ] 00:10:16.931 }' 00:10:16.931 04:01:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:16.931 04:01:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:17.190 04:01:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:17.191 04:01:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:17.191 04:01:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:17.191 04:01:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 
00:10:17.191 04:01:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:17.450 04:01:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:10:17.450 04:01:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:10:17.450 04:01:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:17.450 04:01:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:17.450 [2024-12-06 04:01:10.565031] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:17.450 04:01:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:17.450 04:01:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:10:17.450 04:01:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:17.450 04:01:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:17.450 04:01:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:17.450 04:01:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:17.450 04:01:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:17.450 04:01:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:17.450 04:01:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:17.450 04:01:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:17.450 04:01:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 
-- # local tmp 00:10:17.450 04:01:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:17.450 04:01:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:17.450 04:01:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:17.450 04:01:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:17.450 04:01:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:17.450 04:01:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:17.450 "name": "Existed_Raid", 00:10:17.450 "uuid": "82ac14be-afda-4bd5-9951-39fcbcd28562", 00:10:17.450 "strip_size_kb": 64, 00:10:17.450 "state": "configuring", 00:10:17.450 "raid_level": "raid0", 00:10:17.450 "superblock": true, 00:10:17.450 "num_base_bdevs": 3, 00:10:17.450 "num_base_bdevs_discovered": 2, 00:10:17.450 "num_base_bdevs_operational": 3, 00:10:17.450 "base_bdevs_list": [ 00:10:17.450 { 00:10:17.450 "name": "BaseBdev1", 00:10:17.450 "uuid": "817d4d0a-d41e-4f8e-870b-df5ee1bab7e4", 00:10:17.450 "is_configured": true, 00:10:17.450 "data_offset": 2048, 00:10:17.450 "data_size": 63488 00:10:17.450 }, 00:10:17.450 { 00:10:17.450 "name": null, 00:10:17.450 "uuid": "f2a675de-198d-4697-ab33-77734661c0f3", 00:10:17.450 "is_configured": false, 00:10:17.450 "data_offset": 0, 00:10:17.450 "data_size": 63488 00:10:17.450 }, 00:10:17.450 { 00:10:17.450 "name": "BaseBdev3", 00:10:17.450 "uuid": "1811118e-5461-4004-bca2-b705a9500cbb", 00:10:17.450 "is_configured": true, 00:10:17.450 "data_offset": 2048, 00:10:17.450 "data_size": 63488 00:10:17.450 } 00:10:17.450 ] 00:10:17.450 }' 00:10:17.450 04:01:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:17.450 04:01:10 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:10:17.710 04:01:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:17.710 04:01:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:10:17.710 04:01:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:17.710 04:01:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:17.710 04:01:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:17.710 04:01:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:10:17.710 04:01:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:10:17.710 04:01:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:17.710 04:01:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:17.710 [2024-12-06 04:01:11.056270] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:10:17.970 04:01:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:17.970 04:01:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:10:17.970 04:01:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:17.970 04:01:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:17.970 04:01:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:17.970 04:01:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:17.970 04:01:11 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:17.970 04:01:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:17.970 04:01:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:17.970 04:01:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:17.970 04:01:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:17.970 04:01:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:17.970 04:01:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:17.970 04:01:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:17.970 04:01:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:17.970 04:01:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:17.970 04:01:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:17.970 "name": "Existed_Raid", 00:10:17.970 "uuid": "82ac14be-afda-4bd5-9951-39fcbcd28562", 00:10:17.970 "strip_size_kb": 64, 00:10:17.970 "state": "configuring", 00:10:17.970 "raid_level": "raid0", 00:10:17.970 "superblock": true, 00:10:17.970 "num_base_bdevs": 3, 00:10:17.970 "num_base_bdevs_discovered": 1, 00:10:17.970 "num_base_bdevs_operational": 3, 00:10:17.970 "base_bdevs_list": [ 00:10:17.970 { 00:10:17.970 "name": null, 00:10:17.970 "uuid": "817d4d0a-d41e-4f8e-870b-df5ee1bab7e4", 00:10:17.970 "is_configured": false, 00:10:17.970 "data_offset": 0, 00:10:17.970 "data_size": 63488 00:10:17.970 }, 00:10:17.970 { 00:10:17.970 "name": null, 00:10:17.970 "uuid": "f2a675de-198d-4697-ab33-77734661c0f3", 00:10:17.970 "is_configured": false, 00:10:17.970 "data_offset": 0, 00:10:17.970 
"data_size": 63488 00:10:17.970 }, 00:10:17.970 { 00:10:17.970 "name": "BaseBdev3", 00:10:17.970 "uuid": "1811118e-5461-4004-bca2-b705a9500cbb", 00:10:17.970 "is_configured": true, 00:10:17.970 "data_offset": 2048, 00:10:17.970 "data_size": 63488 00:10:17.970 } 00:10:17.970 ] 00:10:17.970 }' 00:10:17.970 04:01:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:17.970 04:01:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:18.539 04:01:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:10:18.539 04:01:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:18.539 04:01:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:18.539 04:01:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:18.539 04:01:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:18.539 04:01:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:10:18.539 04:01:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:10:18.539 04:01:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:18.539 04:01:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:18.539 [2024-12-06 04:01:11.634327] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:18.539 04:01:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:18.539 04:01:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:10:18.539 04:01:11 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:18.539 04:01:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:18.539 04:01:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:18.539 04:01:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:18.539 04:01:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:18.539 04:01:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:18.539 04:01:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:18.539 04:01:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:18.539 04:01:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:18.539 04:01:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:18.539 04:01:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:18.539 04:01:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:18.539 04:01:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:18.539 04:01:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:18.539 04:01:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:18.539 "name": "Existed_Raid", 00:10:18.539 "uuid": "82ac14be-afda-4bd5-9951-39fcbcd28562", 00:10:18.539 "strip_size_kb": 64, 00:10:18.539 "state": "configuring", 00:10:18.539 "raid_level": "raid0", 00:10:18.539 "superblock": true, 00:10:18.539 "num_base_bdevs": 3, 00:10:18.539 
"num_base_bdevs_discovered": 2, 00:10:18.539 "num_base_bdevs_operational": 3, 00:10:18.539 "base_bdevs_list": [ 00:10:18.539 { 00:10:18.539 "name": null, 00:10:18.539 "uuid": "817d4d0a-d41e-4f8e-870b-df5ee1bab7e4", 00:10:18.539 "is_configured": false, 00:10:18.539 "data_offset": 0, 00:10:18.539 "data_size": 63488 00:10:18.539 }, 00:10:18.539 { 00:10:18.539 "name": "BaseBdev2", 00:10:18.539 "uuid": "f2a675de-198d-4697-ab33-77734661c0f3", 00:10:18.539 "is_configured": true, 00:10:18.539 "data_offset": 2048, 00:10:18.539 "data_size": 63488 00:10:18.539 }, 00:10:18.539 { 00:10:18.539 "name": "BaseBdev3", 00:10:18.539 "uuid": "1811118e-5461-4004-bca2-b705a9500cbb", 00:10:18.539 "is_configured": true, 00:10:18.539 "data_offset": 2048, 00:10:18.539 "data_size": 63488 00:10:18.539 } 00:10:18.539 ] 00:10:18.539 }' 00:10:18.539 04:01:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:18.539 04:01:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:18.799 04:01:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:18.799 04:01:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:18.799 04:01:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:18.799 04:01:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:10:18.799 04:01:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:18.799 04:01:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:10:18.799 04:01:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:18.799 04:01:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:10:18.799 04:01:12 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:18.799 04:01:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:18.799 04:01:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:19.075 04:01:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 817d4d0a-d41e-4f8e-870b-df5ee1bab7e4 00:10:19.075 04:01:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:19.075 04:01:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:19.075 [2024-12-06 04:01:12.208011] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:10:19.075 [2024-12-06 04:01:12.208297] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:10:19.075 [2024-12-06 04:01:12.208317] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:10:19.075 [2024-12-06 04:01:12.208602] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:10:19.075 [2024-12-06 04:01:12.208776] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:10:19.075 [2024-12-06 04:01:12.208787] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:10:19.075 NewBaseBdev 00:10:19.075 [2024-12-06 04:01:12.208939] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:19.075 04:01:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:19.075 04:01:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:10:19.075 04:01:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:10:19.075 
04:01:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:19.075 04:01:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:10:19.075 04:01:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:19.075 04:01:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:19.075 04:01:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:19.075 04:01:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:19.075 04:01:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:19.075 04:01:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:19.075 04:01:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:10:19.075 04:01:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:19.075 04:01:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:19.075 [ 00:10:19.075 { 00:10:19.075 "name": "NewBaseBdev", 00:10:19.075 "aliases": [ 00:10:19.075 "817d4d0a-d41e-4f8e-870b-df5ee1bab7e4" 00:10:19.075 ], 00:10:19.075 "product_name": "Malloc disk", 00:10:19.075 "block_size": 512, 00:10:19.075 "num_blocks": 65536, 00:10:19.075 "uuid": "817d4d0a-d41e-4f8e-870b-df5ee1bab7e4", 00:10:19.075 "assigned_rate_limits": { 00:10:19.075 "rw_ios_per_sec": 0, 00:10:19.075 "rw_mbytes_per_sec": 0, 00:10:19.075 "r_mbytes_per_sec": 0, 00:10:19.075 "w_mbytes_per_sec": 0 00:10:19.075 }, 00:10:19.075 "claimed": true, 00:10:19.075 "claim_type": "exclusive_write", 00:10:19.075 "zoned": false, 00:10:19.075 "supported_io_types": { 00:10:19.075 "read": true, 00:10:19.075 "write": true, 00:10:19.075 
"unmap": true, 00:10:19.075 "flush": true, 00:10:19.075 "reset": true, 00:10:19.075 "nvme_admin": false, 00:10:19.075 "nvme_io": false, 00:10:19.075 "nvme_io_md": false, 00:10:19.075 "write_zeroes": true, 00:10:19.075 "zcopy": true, 00:10:19.075 "get_zone_info": false, 00:10:19.075 "zone_management": false, 00:10:19.075 "zone_append": false, 00:10:19.075 "compare": false, 00:10:19.075 "compare_and_write": false, 00:10:19.075 "abort": true, 00:10:19.075 "seek_hole": false, 00:10:19.075 "seek_data": false, 00:10:19.075 "copy": true, 00:10:19.075 "nvme_iov_md": false 00:10:19.075 }, 00:10:19.075 "memory_domains": [ 00:10:19.075 { 00:10:19.075 "dma_device_id": "system", 00:10:19.075 "dma_device_type": 1 00:10:19.075 }, 00:10:19.075 { 00:10:19.075 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:19.075 "dma_device_type": 2 00:10:19.075 } 00:10:19.075 ], 00:10:19.075 "driver_specific": {} 00:10:19.075 } 00:10:19.075 ] 00:10:19.075 04:01:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:19.075 04:01:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:10:19.075 04:01:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid0 64 3 00:10:19.075 04:01:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:19.075 04:01:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:19.075 04:01:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:19.075 04:01:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:19.075 04:01:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:19.075 04:01:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 
00:10:19.075 04:01:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:19.075 04:01:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:19.075 04:01:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:19.075 04:01:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:19.075 04:01:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:19.075 04:01:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:19.075 04:01:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:19.075 04:01:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:19.075 04:01:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:19.075 "name": "Existed_Raid", 00:10:19.075 "uuid": "82ac14be-afda-4bd5-9951-39fcbcd28562", 00:10:19.075 "strip_size_kb": 64, 00:10:19.075 "state": "online", 00:10:19.075 "raid_level": "raid0", 00:10:19.075 "superblock": true, 00:10:19.075 "num_base_bdevs": 3, 00:10:19.075 "num_base_bdevs_discovered": 3, 00:10:19.075 "num_base_bdevs_operational": 3, 00:10:19.075 "base_bdevs_list": [ 00:10:19.075 { 00:10:19.075 "name": "NewBaseBdev", 00:10:19.075 "uuid": "817d4d0a-d41e-4f8e-870b-df5ee1bab7e4", 00:10:19.075 "is_configured": true, 00:10:19.075 "data_offset": 2048, 00:10:19.075 "data_size": 63488 00:10:19.075 }, 00:10:19.075 { 00:10:19.075 "name": "BaseBdev2", 00:10:19.075 "uuid": "f2a675de-198d-4697-ab33-77734661c0f3", 00:10:19.075 "is_configured": true, 00:10:19.075 "data_offset": 2048, 00:10:19.075 "data_size": 63488 00:10:19.075 }, 00:10:19.075 { 00:10:19.075 "name": "BaseBdev3", 00:10:19.076 "uuid": "1811118e-5461-4004-bca2-b705a9500cbb", 00:10:19.076 
"is_configured": true, 00:10:19.076 "data_offset": 2048, 00:10:19.076 "data_size": 63488 00:10:19.076 } 00:10:19.076 ] 00:10:19.076 }' 00:10:19.076 04:01:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:19.076 04:01:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:19.643 04:01:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:10:19.643 04:01:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:10:19.643 04:01:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:19.643 04:01:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:19.643 04:01:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:10:19.643 04:01:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:19.643 04:01:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:19.643 04:01:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:10:19.643 04:01:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:19.643 04:01:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:19.643 [2024-12-06 04:01:12.743518] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:19.643 04:01:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:19.643 04:01:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:19.643 "name": "Existed_Raid", 00:10:19.643 "aliases": [ 00:10:19.643 "82ac14be-afda-4bd5-9951-39fcbcd28562" 00:10:19.643 ], 00:10:19.643 "product_name": "Raid 
Volume", 00:10:19.643 "block_size": 512, 00:10:19.643 "num_blocks": 190464, 00:10:19.643 "uuid": "82ac14be-afda-4bd5-9951-39fcbcd28562", 00:10:19.643 "assigned_rate_limits": { 00:10:19.643 "rw_ios_per_sec": 0, 00:10:19.643 "rw_mbytes_per_sec": 0, 00:10:19.643 "r_mbytes_per_sec": 0, 00:10:19.643 "w_mbytes_per_sec": 0 00:10:19.643 }, 00:10:19.643 "claimed": false, 00:10:19.643 "zoned": false, 00:10:19.643 "supported_io_types": { 00:10:19.643 "read": true, 00:10:19.643 "write": true, 00:10:19.643 "unmap": true, 00:10:19.643 "flush": true, 00:10:19.643 "reset": true, 00:10:19.643 "nvme_admin": false, 00:10:19.643 "nvme_io": false, 00:10:19.643 "nvme_io_md": false, 00:10:19.643 "write_zeroes": true, 00:10:19.643 "zcopy": false, 00:10:19.643 "get_zone_info": false, 00:10:19.643 "zone_management": false, 00:10:19.643 "zone_append": false, 00:10:19.643 "compare": false, 00:10:19.643 "compare_and_write": false, 00:10:19.643 "abort": false, 00:10:19.643 "seek_hole": false, 00:10:19.643 "seek_data": false, 00:10:19.643 "copy": false, 00:10:19.643 "nvme_iov_md": false 00:10:19.643 }, 00:10:19.643 "memory_domains": [ 00:10:19.643 { 00:10:19.643 "dma_device_id": "system", 00:10:19.643 "dma_device_type": 1 00:10:19.643 }, 00:10:19.643 { 00:10:19.643 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:19.643 "dma_device_type": 2 00:10:19.643 }, 00:10:19.643 { 00:10:19.643 "dma_device_id": "system", 00:10:19.643 "dma_device_type": 1 00:10:19.643 }, 00:10:19.643 { 00:10:19.644 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:19.644 "dma_device_type": 2 00:10:19.644 }, 00:10:19.644 { 00:10:19.644 "dma_device_id": "system", 00:10:19.644 "dma_device_type": 1 00:10:19.644 }, 00:10:19.644 { 00:10:19.644 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:19.644 "dma_device_type": 2 00:10:19.644 } 00:10:19.644 ], 00:10:19.644 "driver_specific": { 00:10:19.644 "raid": { 00:10:19.644 "uuid": "82ac14be-afda-4bd5-9951-39fcbcd28562", 00:10:19.644 "strip_size_kb": 64, 00:10:19.644 "state": "online", 
00:10:19.644 "raid_level": "raid0", 00:10:19.644 "superblock": true, 00:10:19.644 "num_base_bdevs": 3, 00:10:19.644 "num_base_bdevs_discovered": 3, 00:10:19.644 "num_base_bdevs_operational": 3, 00:10:19.644 "base_bdevs_list": [ 00:10:19.644 { 00:10:19.644 "name": "NewBaseBdev", 00:10:19.644 "uuid": "817d4d0a-d41e-4f8e-870b-df5ee1bab7e4", 00:10:19.644 "is_configured": true, 00:10:19.644 "data_offset": 2048, 00:10:19.644 "data_size": 63488 00:10:19.644 }, 00:10:19.644 { 00:10:19.644 "name": "BaseBdev2", 00:10:19.644 "uuid": "f2a675de-198d-4697-ab33-77734661c0f3", 00:10:19.644 "is_configured": true, 00:10:19.644 "data_offset": 2048, 00:10:19.644 "data_size": 63488 00:10:19.644 }, 00:10:19.644 { 00:10:19.644 "name": "BaseBdev3", 00:10:19.644 "uuid": "1811118e-5461-4004-bca2-b705a9500cbb", 00:10:19.644 "is_configured": true, 00:10:19.644 "data_offset": 2048, 00:10:19.644 "data_size": 63488 00:10:19.644 } 00:10:19.644 ] 00:10:19.644 } 00:10:19.644 } 00:10:19.644 }' 00:10:19.644 04:01:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:19.644 04:01:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:10:19.644 BaseBdev2 00:10:19.644 BaseBdev3' 00:10:19.644 04:01:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:19.644 04:01:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:19.644 04:01:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:19.644 04:01:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:19.644 04:01:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b 
NewBaseBdev 00:10:19.644 04:01:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:19.644 04:01:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:19.644 04:01:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:19.644 04:01:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:19.644 04:01:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:19.644 04:01:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:19.644 04:01:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:10:19.644 04:01:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:19.644 04:01:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:19.644 04:01:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:19.644 04:01:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:19.644 04:01:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:19.644 04:01:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:19.644 04:01:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:19.644 04:01:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:10:19.644 04:01:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:19.644 04:01:12 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:19.644 04:01:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:19.903 04:01:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:19.903 04:01:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:19.903 04:01:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:19.903 04:01:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:19.903 04:01:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:19.903 04:01:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:19.903 [2024-12-06 04:01:13.046678] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:19.903 [2024-12-06 04:01:13.046710] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:19.903 [2024-12-06 04:01:13.046805] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:19.903 [2024-12-06 04:01:13.046862] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:19.903 [2024-12-06 04:01:13.046875] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:10:19.903 04:01:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:19.903 04:01:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 64531 00:10:19.903 04:01:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 64531 ']' 00:10:19.903 04:01:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 
64531 00:10:19.903 04:01:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:10:19.903 04:01:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:19.903 04:01:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 64531 00:10:19.903 04:01:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:19.903 04:01:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:19.903 04:01:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 64531' 00:10:19.903 killing process with pid 64531 00:10:19.903 04:01:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 64531 00:10:19.903 [2024-12-06 04:01:13.095294] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:19.903 04:01:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 64531 00:10:20.162 [2024-12-06 04:01:13.408941] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:21.601 04:01:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:10:21.601 00:10:21.601 real 0m10.949s 00:10:21.601 user 0m17.420s 00:10:21.601 sys 0m1.866s 00:10:21.601 04:01:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:21.601 04:01:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:21.601 ************************************ 00:10:21.601 END TEST raid_state_function_test_sb 00:10:21.601 ************************************ 00:10:21.601 04:01:14 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test raid0 3 00:10:21.601 04:01:14 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:10:21.601 
04:01:14 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:21.601 04:01:14 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:21.601 ************************************ 00:10:21.601 START TEST raid_superblock_test 00:10:21.601 ************************************ 00:10:21.601 04:01:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test raid0 3 00:10:21.601 04:01:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid0 00:10:21.601 04:01:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=3 00:10:21.601 04:01:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:10:21.601 04:01:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:10:21.601 04:01:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:10:21.601 04:01:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:10:21.601 04:01:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:10:21.601 04:01:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:10:21.601 04:01:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:10:21.601 04:01:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:10:21.601 04:01:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:10:21.601 04:01:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:10:21.601 04:01:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:10:21.601 04:01:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid0 '!=' raid1 ']' 00:10:21.601 04:01:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 
00:10:21.601 04:01:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:10:21.602 04:01:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=65157 00:10:21.602 04:01:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:10:21.602 04:01:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 65157 00:10:21.602 04:01:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 65157 ']' 00:10:21.602 04:01:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:21.602 04:01:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:21.602 04:01:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:21.602 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:21.602 04:01:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:21.602 04:01:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:21.602 [2024-12-06 04:01:14.750722] Starting SPDK v25.01-pre git sha1 a4d2a837b / DPDK 24.03.0 initialization... 
00:10:21.602 [2024-12-06 04:01:14.750846] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65157 ] 00:10:21.602 [2024-12-06 04:01:14.925363] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:21.877 [2024-12-06 04:01:15.049693] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:22.153 [2024-12-06 04:01:15.241665] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:22.153 [2024-12-06 04:01:15.241722] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:22.412 04:01:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:22.412 04:01:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:10:22.412 04:01:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:10:22.412 04:01:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:10:22.413 04:01:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:10:22.413 04:01:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:10:22.413 04:01:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:10:22.413 04:01:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:10:22.413 04:01:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:10:22.413 04:01:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:10:22.413 04:01:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:10:22.413 
04:01:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:22.413 04:01:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:22.413 malloc1 00:10:22.413 04:01:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:22.413 04:01:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:10:22.413 04:01:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:22.413 04:01:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:22.413 [2024-12-06 04:01:15.666648] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:10:22.413 [2024-12-06 04:01:15.666782] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:22.413 [2024-12-06 04:01:15.666830] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:10:22.413 [2024-12-06 04:01:15.666899] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:22.413 [2024-12-06 04:01:15.669424] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:22.413 [2024-12-06 04:01:15.669508] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:10:22.413 pt1 00:10:22.413 04:01:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:22.413 04:01:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:10:22.413 04:01:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:10:22.413 04:01:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:10:22.413 04:01:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:10:22.413 04:01:15 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:10:22.413 04:01:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:10:22.413 04:01:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:10:22.413 04:01:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:10:22.413 04:01:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:10:22.413 04:01:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:22.413 04:01:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:22.413 malloc2 00:10:22.413 04:01:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:22.413 04:01:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:10:22.413 04:01:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:22.413 04:01:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:22.413 [2024-12-06 04:01:15.727554] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:10:22.413 [2024-12-06 04:01:15.727614] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:22.413 [2024-12-06 04:01:15.727641] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:10:22.413 [2024-12-06 04:01:15.727651] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:22.413 [2024-12-06 04:01:15.729731] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:22.413 [2024-12-06 04:01:15.729844] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:10:22.413 
pt2 00:10:22.413 04:01:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:22.413 04:01:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:10:22.413 04:01:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:10:22.413 04:01:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:10:22.413 04:01:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:10:22.413 04:01:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:10:22.413 04:01:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:10:22.413 04:01:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:10:22.413 04:01:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:10:22.413 04:01:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:10:22.413 04:01:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:22.413 04:01:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:22.672 malloc3 00:10:22.672 04:01:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:22.672 04:01:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:10:22.672 04:01:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:22.672 04:01:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:22.672 [2024-12-06 04:01:15.794687] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:10:22.672 [2024-12-06 04:01:15.794800] 
vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:22.672 [2024-12-06 04:01:15.794846] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:10:22.672 [2024-12-06 04:01:15.794883] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:22.672 [2024-12-06 04:01:15.797313] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:22.672 [2024-12-06 04:01:15.797401] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:10:22.672 pt3 00:10:22.672 04:01:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:22.672 04:01:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:10:22.672 04:01:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:10:22.672 04:01:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''pt1 pt2 pt3'\''' -n raid_bdev1 -s 00:10:22.672 04:01:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:22.672 04:01:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:22.672 [2024-12-06 04:01:15.806715] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:10:22.672 [2024-12-06 04:01:15.808577] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:10:22.672 [2024-12-06 04:01:15.808706] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:10:22.672 [2024-12-06 04:01:15.808904] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:10:22.673 [2024-12-06 04:01:15.808957] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:10:22.673 [2024-12-06 04:01:15.809268] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 
00:10:22.673 [2024-12-06 04:01:15.809494] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:10:22.673 [2024-12-06 04:01:15.809535] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:10:22.673 [2024-12-06 04:01:15.809732] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:22.673 04:01:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:22.673 04:01:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:10:22.673 04:01:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:22.673 04:01:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:22.673 04:01:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:22.673 04:01:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:22.673 04:01:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:22.673 04:01:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:22.673 04:01:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:22.673 04:01:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:22.673 04:01:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:22.673 04:01:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:22.673 04:01:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:22.673 04:01:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:22.673 04:01:15 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:22.673 04:01:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:22.673 04:01:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:22.673 "name": "raid_bdev1", 00:10:22.673 "uuid": "ab6c4480-02e6-4273-b1ac-e211e74af8c6", 00:10:22.673 "strip_size_kb": 64, 00:10:22.673 "state": "online", 00:10:22.673 "raid_level": "raid0", 00:10:22.673 "superblock": true, 00:10:22.673 "num_base_bdevs": 3, 00:10:22.673 "num_base_bdevs_discovered": 3, 00:10:22.673 "num_base_bdevs_operational": 3, 00:10:22.673 "base_bdevs_list": [ 00:10:22.673 { 00:10:22.673 "name": "pt1", 00:10:22.673 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:22.673 "is_configured": true, 00:10:22.673 "data_offset": 2048, 00:10:22.673 "data_size": 63488 00:10:22.673 }, 00:10:22.673 { 00:10:22.673 "name": "pt2", 00:10:22.673 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:22.673 "is_configured": true, 00:10:22.673 "data_offset": 2048, 00:10:22.673 "data_size": 63488 00:10:22.673 }, 00:10:22.673 { 00:10:22.673 "name": "pt3", 00:10:22.673 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:22.673 "is_configured": true, 00:10:22.673 "data_offset": 2048, 00:10:22.673 "data_size": 63488 00:10:22.673 } 00:10:22.673 ] 00:10:22.673 }' 00:10:22.673 04:01:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:22.673 04:01:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:22.933 04:01:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:10:22.933 04:01:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:10:22.933 04:01:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:22.933 04:01:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local 
base_bdev_names 00:10:22.933 04:01:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:10:22.933 04:01:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:22.933 04:01:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:10:22.933 04:01:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:22.933 04:01:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:22.933 04:01:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:23.193 [2024-12-06 04:01:16.287135] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:23.193 04:01:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:23.193 04:01:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:23.193 "name": "raid_bdev1", 00:10:23.193 "aliases": [ 00:10:23.193 "ab6c4480-02e6-4273-b1ac-e211e74af8c6" 00:10:23.193 ], 00:10:23.193 "product_name": "Raid Volume", 00:10:23.193 "block_size": 512, 00:10:23.193 "num_blocks": 190464, 00:10:23.193 "uuid": "ab6c4480-02e6-4273-b1ac-e211e74af8c6", 00:10:23.193 "assigned_rate_limits": { 00:10:23.193 "rw_ios_per_sec": 0, 00:10:23.193 "rw_mbytes_per_sec": 0, 00:10:23.193 "r_mbytes_per_sec": 0, 00:10:23.193 "w_mbytes_per_sec": 0 00:10:23.193 }, 00:10:23.193 "claimed": false, 00:10:23.193 "zoned": false, 00:10:23.193 "supported_io_types": { 00:10:23.193 "read": true, 00:10:23.193 "write": true, 00:10:23.193 "unmap": true, 00:10:23.193 "flush": true, 00:10:23.193 "reset": true, 00:10:23.193 "nvme_admin": false, 00:10:23.193 "nvme_io": false, 00:10:23.193 "nvme_io_md": false, 00:10:23.193 "write_zeroes": true, 00:10:23.193 "zcopy": false, 00:10:23.193 "get_zone_info": false, 00:10:23.193 "zone_management": false, 00:10:23.193 "zone_append": false, 00:10:23.193 "compare": 
false, 00:10:23.193 "compare_and_write": false, 00:10:23.193 "abort": false, 00:10:23.193 "seek_hole": false, 00:10:23.193 "seek_data": false, 00:10:23.193 "copy": false, 00:10:23.193 "nvme_iov_md": false 00:10:23.193 }, 00:10:23.193 "memory_domains": [ 00:10:23.193 { 00:10:23.193 "dma_device_id": "system", 00:10:23.193 "dma_device_type": 1 00:10:23.193 }, 00:10:23.193 { 00:10:23.193 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:23.193 "dma_device_type": 2 00:10:23.193 }, 00:10:23.193 { 00:10:23.193 "dma_device_id": "system", 00:10:23.193 "dma_device_type": 1 00:10:23.193 }, 00:10:23.193 { 00:10:23.193 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:23.193 "dma_device_type": 2 00:10:23.193 }, 00:10:23.193 { 00:10:23.193 "dma_device_id": "system", 00:10:23.193 "dma_device_type": 1 00:10:23.193 }, 00:10:23.193 { 00:10:23.193 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:23.193 "dma_device_type": 2 00:10:23.193 } 00:10:23.193 ], 00:10:23.193 "driver_specific": { 00:10:23.193 "raid": { 00:10:23.193 "uuid": "ab6c4480-02e6-4273-b1ac-e211e74af8c6", 00:10:23.193 "strip_size_kb": 64, 00:10:23.193 "state": "online", 00:10:23.193 "raid_level": "raid0", 00:10:23.193 "superblock": true, 00:10:23.193 "num_base_bdevs": 3, 00:10:23.193 "num_base_bdevs_discovered": 3, 00:10:23.193 "num_base_bdevs_operational": 3, 00:10:23.193 "base_bdevs_list": [ 00:10:23.193 { 00:10:23.193 "name": "pt1", 00:10:23.193 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:23.193 "is_configured": true, 00:10:23.193 "data_offset": 2048, 00:10:23.193 "data_size": 63488 00:10:23.193 }, 00:10:23.193 { 00:10:23.193 "name": "pt2", 00:10:23.193 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:23.193 "is_configured": true, 00:10:23.193 "data_offset": 2048, 00:10:23.193 "data_size": 63488 00:10:23.193 }, 00:10:23.193 { 00:10:23.193 "name": "pt3", 00:10:23.193 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:23.193 "is_configured": true, 00:10:23.193 "data_offset": 2048, 00:10:23.193 "data_size": 
63488 00:10:23.193 } 00:10:23.193 ] 00:10:23.193 } 00:10:23.193 } 00:10:23.193 }' 00:10:23.193 04:01:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:23.193 04:01:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:10:23.193 pt2 00:10:23.193 pt3' 00:10:23.193 04:01:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:23.193 04:01:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:23.193 04:01:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:23.193 04:01:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:10:23.193 04:01:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:23.193 04:01:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:23.193 04:01:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:23.193 04:01:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:23.193 04:01:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:23.194 04:01:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:23.194 04:01:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:23.194 04:01:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:10:23.194 04:01:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:23.194 04:01:16 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:10:23.194 04:01:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:23.194 04:01:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:23.194 04:01:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:23.194 04:01:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:23.194 04:01:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:23.194 04:01:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:10:23.194 04:01:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:23.194 04:01:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:23.194 04:01:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:23.194 04:01:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:23.454 04:01:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:23.454 04:01:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:23.454 04:01:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:10:23.454 04:01:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:23.454 04:01:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:23.454 04:01:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:10:23.454 [2024-12-06 04:01:16.578525] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:23.454 04:01:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:10:23.454 04:01:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=ab6c4480-02e6-4273-b1ac-e211e74af8c6 00:10:23.454 04:01:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z ab6c4480-02e6-4273-b1ac-e211e74af8c6 ']' 00:10:23.454 04:01:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:10:23.454 04:01:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:23.454 04:01:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:23.454 [2024-12-06 04:01:16.626163] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:23.454 [2024-12-06 04:01:16.626191] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:23.454 [2024-12-06 04:01:16.626262] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:23.454 [2024-12-06 04:01:16.626322] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:23.454 [2024-12-06 04:01:16.626333] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:10:23.454 04:01:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:23.454 04:01:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:23.454 04:01:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:10:23.454 04:01:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:23.454 04:01:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:23.454 04:01:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:23.454 04:01:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 
00:10:23.454 04:01:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:10:23.454 04:01:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:10:23.454 04:01:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:10:23.454 04:01:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:23.454 04:01:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:23.454 04:01:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:23.454 04:01:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:10:23.454 04:01:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:10:23.454 04:01:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:23.454 04:01:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:23.454 04:01:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:23.454 04:01:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:10:23.454 04:01:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:10:23.454 04:01:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:23.454 04:01:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:23.454 04:01:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:23.454 04:01:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:10:23.454 04:01:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:10:23.454 04:01:16 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:23.454 04:01:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:23.454 04:01:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:23.454 04:01:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:10:23.454 04:01:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:10:23.454 04:01:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:10:23.454 04:01:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:10:23.454 04:01:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:10:23.454 04:01:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:23.454 04:01:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:10:23.454 04:01:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:23.454 04:01:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:10:23.454 04:01:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:23.454 04:01:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:23.454 [2024-12-06 04:01:16.770016] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:10:23.454 [2024-12-06 04:01:16.772007] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:10:23.454 [2024-12-06 04:01:16.772080] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:10:23.454 [2024-12-06 04:01:16.772134] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:10:23.454 [2024-12-06 04:01:16.772188] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:10:23.454 [2024-12-06 04:01:16.772233] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:10:23.454 [2024-12-06 04:01:16.772253] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:23.454 [2024-12-06 04:01:16.772266] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:10:23.454 request: 00:10:23.454 { 00:10:23.454 "name": "raid_bdev1", 00:10:23.454 "raid_level": "raid0", 00:10:23.454 "base_bdevs": [ 00:10:23.454 "malloc1", 00:10:23.454 "malloc2", 00:10:23.454 "malloc3" 00:10:23.454 ], 00:10:23.454 "strip_size_kb": 64, 00:10:23.454 "superblock": false, 00:10:23.454 "method": "bdev_raid_create", 00:10:23.454 "req_id": 1 00:10:23.454 } 00:10:23.454 Got JSON-RPC error response 00:10:23.454 response: 00:10:23.454 { 00:10:23.454 "code": -17, 00:10:23.454 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:10:23.454 } 00:10:23.454 04:01:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:10:23.454 04:01:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:10:23.454 04:01:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:10:23.454 04:01:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:10:23.454 04:01:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:10:23.454 04:01:16 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:23.454 04:01:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:10:23.454 04:01:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:23.454 04:01:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:23.454 04:01:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:23.715 04:01:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:10:23.715 04:01:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:10:23.715 04:01:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:10:23.715 04:01:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:23.715 04:01:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:23.715 [2024-12-06 04:01:16.829858] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:10:23.715 [2024-12-06 04:01:16.829916] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:23.715 [2024-12-06 04:01:16.829936] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:10:23.715 [2024-12-06 04:01:16.829945] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:23.715 [2024-12-06 04:01:16.832207] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:23.715 [2024-12-06 04:01:16.832330] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:10:23.715 [2024-12-06 04:01:16.832452] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:10:23.715 [2024-12-06 04:01:16.832517] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 
00:10:23.715 pt1 00:10:23.715 04:01:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:23.715 04:01:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 3 00:10:23.715 04:01:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:23.715 04:01:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:23.715 04:01:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:23.715 04:01:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:23.715 04:01:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:23.715 04:01:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:23.715 04:01:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:23.715 04:01:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:23.715 04:01:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:23.715 04:01:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:23.715 04:01:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:23.715 04:01:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:23.715 04:01:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:23.715 04:01:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:23.715 04:01:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:23.715 "name": "raid_bdev1", 00:10:23.715 "uuid": "ab6c4480-02e6-4273-b1ac-e211e74af8c6", 00:10:23.715 
"strip_size_kb": 64, 00:10:23.715 "state": "configuring", 00:10:23.715 "raid_level": "raid0", 00:10:23.715 "superblock": true, 00:10:23.715 "num_base_bdevs": 3, 00:10:23.715 "num_base_bdevs_discovered": 1, 00:10:23.715 "num_base_bdevs_operational": 3, 00:10:23.715 "base_bdevs_list": [ 00:10:23.715 { 00:10:23.715 "name": "pt1", 00:10:23.715 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:23.715 "is_configured": true, 00:10:23.715 "data_offset": 2048, 00:10:23.715 "data_size": 63488 00:10:23.715 }, 00:10:23.715 { 00:10:23.715 "name": null, 00:10:23.715 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:23.715 "is_configured": false, 00:10:23.715 "data_offset": 2048, 00:10:23.715 "data_size": 63488 00:10:23.715 }, 00:10:23.715 { 00:10:23.715 "name": null, 00:10:23.715 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:23.715 "is_configured": false, 00:10:23.715 "data_offset": 2048, 00:10:23.715 "data_size": 63488 00:10:23.715 } 00:10:23.715 ] 00:10:23.715 }' 00:10:23.715 04:01:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:23.715 04:01:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:23.975 04:01:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 3 -gt 2 ']' 00:10:23.975 04:01:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:10:23.975 04:01:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:23.975 04:01:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:23.975 [2024-12-06 04:01:17.245168] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:10:23.975 [2024-12-06 04:01:17.245311] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:23.975 [2024-12-06 04:01:17.245364] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created 
at: 0x0x616000009c80 00:10:23.975 [2024-12-06 04:01:17.245403] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:23.975 [2024-12-06 04:01:17.245896] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:23.975 [2024-12-06 04:01:17.245964] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:10:23.975 [2024-12-06 04:01:17.246089] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:10:23.975 [2024-12-06 04:01:17.246151] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:10:23.975 pt2 00:10:23.975 04:01:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:23.975 04:01:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:10:23.975 04:01:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:23.975 04:01:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:23.975 [2024-12-06 04:01:17.253145] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:10:23.975 04:01:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:23.975 04:01:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 3 00:10:23.975 04:01:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:23.975 04:01:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:23.975 04:01:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:23.975 04:01:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:23.975 04:01:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:23.975 04:01:17 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:23.975 04:01:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:23.975 04:01:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:23.975 04:01:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:23.975 04:01:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:23.975 04:01:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:23.975 04:01:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:23.975 04:01:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:23.975 04:01:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:23.975 04:01:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:23.975 "name": "raid_bdev1", 00:10:23.975 "uuid": "ab6c4480-02e6-4273-b1ac-e211e74af8c6", 00:10:23.975 "strip_size_kb": 64, 00:10:23.975 "state": "configuring", 00:10:23.975 "raid_level": "raid0", 00:10:23.975 "superblock": true, 00:10:23.975 "num_base_bdevs": 3, 00:10:23.975 "num_base_bdevs_discovered": 1, 00:10:23.975 "num_base_bdevs_operational": 3, 00:10:23.975 "base_bdevs_list": [ 00:10:23.975 { 00:10:23.975 "name": "pt1", 00:10:23.975 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:23.975 "is_configured": true, 00:10:23.975 "data_offset": 2048, 00:10:23.975 "data_size": 63488 00:10:23.975 }, 00:10:23.975 { 00:10:23.975 "name": null, 00:10:23.975 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:23.975 "is_configured": false, 00:10:23.975 "data_offset": 0, 00:10:23.975 "data_size": 63488 00:10:23.975 }, 00:10:23.975 { 00:10:23.975 "name": null, 00:10:23.975 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:23.975 
"is_configured": false, 00:10:23.975 "data_offset": 2048, 00:10:23.975 "data_size": 63488 00:10:23.975 } 00:10:23.975 ] 00:10:23.975 }' 00:10:23.975 04:01:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:23.975 04:01:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:24.545 04:01:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:10:24.545 04:01:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:10:24.545 04:01:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:10:24.545 04:01:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:24.545 04:01:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:24.545 [2024-12-06 04:01:17.708343] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:10:24.546 [2024-12-06 04:01:17.708472] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:24.546 [2024-12-06 04:01:17.708508] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:10:24.546 [2024-12-06 04:01:17.708538] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:24.546 [2024-12-06 04:01:17.709013] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:24.546 [2024-12-06 04:01:17.709092] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:10:24.546 [2024-12-06 04:01:17.709204] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:10:24.546 [2024-12-06 04:01:17.709260] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:10:24.546 pt2 00:10:24.546 04:01:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:10:24.546 04:01:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:10:24.546 04:01:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:10:24.546 04:01:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:10:24.546 04:01:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:24.546 04:01:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:24.546 [2024-12-06 04:01:17.720324] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:10:24.546 [2024-12-06 04:01:17.720409] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:24.546 [2024-12-06 04:01:17.720439] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:10:24.546 [2024-12-06 04:01:17.720467] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:24.546 [2024-12-06 04:01:17.720851] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:24.546 [2024-12-06 04:01:17.720910] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:10:24.546 [2024-12-06 04:01:17.720997] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:10:24.546 [2024-12-06 04:01:17.721058] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:10:24.546 [2024-12-06 04:01:17.721212] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:10:24.546 [2024-12-06 04:01:17.721252] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:10:24.546 [2024-12-06 04:01:17.721511] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:10:24.546 [2024-12-06 04:01:17.721700] 
bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:10:24.546 [2024-12-06 04:01:17.721737] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:10:24.546 [2024-12-06 04:01:17.721910] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:24.546 pt3 00:10:24.546 04:01:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:24.546 04:01:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:10:24.546 04:01:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:10:24.546 04:01:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:10:24.546 04:01:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:24.546 04:01:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:24.546 04:01:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:24.546 04:01:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:24.546 04:01:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:24.546 04:01:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:24.546 04:01:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:24.546 04:01:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:24.546 04:01:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:24.546 04:01:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:24.546 04:01:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:10:24.546 04:01:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:24.546 04:01:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:24.546 04:01:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:24.546 04:01:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:24.546 "name": "raid_bdev1", 00:10:24.546 "uuid": "ab6c4480-02e6-4273-b1ac-e211e74af8c6", 00:10:24.546 "strip_size_kb": 64, 00:10:24.546 "state": "online", 00:10:24.546 "raid_level": "raid0", 00:10:24.546 "superblock": true, 00:10:24.546 "num_base_bdevs": 3, 00:10:24.546 "num_base_bdevs_discovered": 3, 00:10:24.546 "num_base_bdevs_operational": 3, 00:10:24.546 "base_bdevs_list": [ 00:10:24.546 { 00:10:24.546 "name": "pt1", 00:10:24.546 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:24.546 "is_configured": true, 00:10:24.546 "data_offset": 2048, 00:10:24.546 "data_size": 63488 00:10:24.546 }, 00:10:24.546 { 00:10:24.546 "name": "pt2", 00:10:24.546 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:24.546 "is_configured": true, 00:10:24.546 "data_offset": 2048, 00:10:24.546 "data_size": 63488 00:10:24.546 }, 00:10:24.546 { 00:10:24.546 "name": "pt3", 00:10:24.546 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:24.546 "is_configured": true, 00:10:24.546 "data_offset": 2048, 00:10:24.546 "data_size": 63488 00:10:24.546 } 00:10:24.546 ] 00:10:24.546 }' 00:10:24.546 04:01:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:24.546 04:01:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:25.118 04:01:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:10:25.118 04:01:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:10:25.118 04:01:18 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:25.118 04:01:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:25.118 04:01:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:10:25.119 04:01:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:25.119 04:01:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:25.119 04:01:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:10:25.119 04:01:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:25.119 04:01:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:25.119 [2024-12-06 04:01:18.179903] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:25.119 04:01:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:25.119 04:01:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:25.119 "name": "raid_bdev1", 00:10:25.119 "aliases": [ 00:10:25.119 "ab6c4480-02e6-4273-b1ac-e211e74af8c6" 00:10:25.119 ], 00:10:25.119 "product_name": "Raid Volume", 00:10:25.119 "block_size": 512, 00:10:25.119 "num_blocks": 190464, 00:10:25.119 "uuid": "ab6c4480-02e6-4273-b1ac-e211e74af8c6", 00:10:25.119 "assigned_rate_limits": { 00:10:25.119 "rw_ios_per_sec": 0, 00:10:25.119 "rw_mbytes_per_sec": 0, 00:10:25.119 "r_mbytes_per_sec": 0, 00:10:25.119 "w_mbytes_per_sec": 0 00:10:25.119 }, 00:10:25.119 "claimed": false, 00:10:25.119 "zoned": false, 00:10:25.119 "supported_io_types": { 00:10:25.119 "read": true, 00:10:25.119 "write": true, 00:10:25.119 "unmap": true, 00:10:25.119 "flush": true, 00:10:25.119 "reset": true, 00:10:25.119 "nvme_admin": false, 00:10:25.119 "nvme_io": false, 00:10:25.119 "nvme_io_md": false, 00:10:25.119 
"write_zeroes": true, 00:10:25.119 "zcopy": false, 00:10:25.119 "get_zone_info": false, 00:10:25.119 "zone_management": false, 00:10:25.119 "zone_append": false, 00:10:25.119 "compare": false, 00:10:25.119 "compare_and_write": false, 00:10:25.119 "abort": false, 00:10:25.119 "seek_hole": false, 00:10:25.119 "seek_data": false, 00:10:25.119 "copy": false, 00:10:25.119 "nvme_iov_md": false 00:10:25.119 }, 00:10:25.119 "memory_domains": [ 00:10:25.119 { 00:10:25.119 "dma_device_id": "system", 00:10:25.119 "dma_device_type": 1 00:10:25.119 }, 00:10:25.119 { 00:10:25.119 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:25.119 "dma_device_type": 2 00:10:25.119 }, 00:10:25.119 { 00:10:25.119 "dma_device_id": "system", 00:10:25.119 "dma_device_type": 1 00:10:25.119 }, 00:10:25.119 { 00:10:25.119 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:25.119 "dma_device_type": 2 00:10:25.119 }, 00:10:25.119 { 00:10:25.119 "dma_device_id": "system", 00:10:25.119 "dma_device_type": 1 00:10:25.119 }, 00:10:25.119 { 00:10:25.119 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:25.119 "dma_device_type": 2 00:10:25.119 } 00:10:25.119 ], 00:10:25.119 "driver_specific": { 00:10:25.119 "raid": { 00:10:25.119 "uuid": "ab6c4480-02e6-4273-b1ac-e211e74af8c6", 00:10:25.119 "strip_size_kb": 64, 00:10:25.119 "state": "online", 00:10:25.119 "raid_level": "raid0", 00:10:25.119 "superblock": true, 00:10:25.119 "num_base_bdevs": 3, 00:10:25.119 "num_base_bdevs_discovered": 3, 00:10:25.119 "num_base_bdevs_operational": 3, 00:10:25.119 "base_bdevs_list": [ 00:10:25.119 { 00:10:25.119 "name": "pt1", 00:10:25.119 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:25.119 "is_configured": true, 00:10:25.119 "data_offset": 2048, 00:10:25.119 "data_size": 63488 00:10:25.119 }, 00:10:25.119 { 00:10:25.119 "name": "pt2", 00:10:25.119 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:25.119 "is_configured": true, 00:10:25.119 "data_offset": 2048, 00:10:25.119 "data_size": 63488 00:10:25.119 }, 00:10:25.119 
{ 00:10:25.119 "name": "pt3", 00:10:25.119 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:25.119 "is_configured": true, 00:10:25.119 "data_offset": 2048, 00:10:25.119 "data_size": 63488 00:10:25.119 } 00:10:25.119 ] 00:10:25.119 } 00:10:25.119 } 00:10:25.119 }' 00:10:25.119 04:01:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:25.119 04:01:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:10:25.119 pt2 00:10:25.119 pt3' 00:10:25.119 04:01:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:25.119 04:01:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:25.119 04:01:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:25.119 04:01:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:25.119 04:01:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:10:25.119 04:01:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:25.119 04:01:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:25.119 04:01:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:25.119 04:01:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:25.119 04:01:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:25.119 04:01:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:25.119 04:01:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:10:25.119 04:01:18 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:25.119 04:01:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:25.119 04:01:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:25.120 04:01:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:25.120 04:01:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:25.120 04:01:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:25.120 04:01:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:25.120 04:01:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:10:25.120 04:01:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:25.120 04:01:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:25.120 04:01:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:25.120 04:01:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:25.120 04:01:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:25.120 04:01:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:25.120 04:01:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:10:25.120 04:01:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:25.120 04:01:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:10:25.120 04:01:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:25.120 
[2024-12-06 04:01:18.443466] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:25.120 04:01:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:25.383 04:01:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' ab6c4480-02e6-4273-b1ac-e211e74af8c6 '!=' ab6c4480-02e6-4273-b1ac-e211e74af8c6 ']' 00:10:25.383 04:01:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid0 00:10:25.383 04:01:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:10:25.383 04:01:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1 00:10:25.383 04:01:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 65157 00:10:25.383 04:01:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 65157 ']' 00:10:25.383 04:01:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # kill -0 65157 00:10:25.383 04:01:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # uname 00:10:25.383 04:01:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:25.383 04:01:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 65157 00:10:25.383 04:01:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:25.383 04:01:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:25.383 04:01:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 65157' 00:10:25.383 killing process with pid 65157 00:10:25.383 04:01:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@973 -- # kill 65157 00:10:25.383 [2024-12-06 04:01:18.519673] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:25.383 04:01:18 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@978 -- # wait 65157 00:10:25.383 [2024-12-06 04:01:18.519825] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:25.383 [2024-12-06 04:01:18.519891] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:25.383 [2024-12-06 04:01:18.519909] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:10:25.643 [2024-12-06 04:01:18.832540] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:27.022 04:01:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:10:27.023 00:10:27.023 real 0m5.342s 00:10:27.023 user 0m7.662s 00:10:27.023 sys 0m0.874s 00:10:27.023 04:01:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:27.023 04:01:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:27.023 ************************************ 00:10:27.023 END TEST raid_superblock_test 00:10:27.023 ************************************ 00:10:27.023 04:01:20 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid0 3 read 00:10:27.023 04:01:20 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:10:27.023 04:01:20 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:27.023 04:01:20 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:27.023 ************************************ 00:10:27.023 START TEST raid_read_error_test 00:10:27.023 ************************************ 00:10:27.023 04:01:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid0 3 read 00:10:27.023 04:01:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0 00:10:27.023 04:01:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=3 00:10:27.023 04:01:20 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:10:27.023 04:01:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:10:27.023 04:01:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:27.023 04:01:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:10:27.023 04:01:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:27.023 04:01:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:27.023 04:01:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:10:27.023 04:01:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:27.023 04:01:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:27.023 04:01:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:10:27.023 04:01:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:27.023 04:01:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:27.023 04:01:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:10:27.023 04:01:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:10:27.023 04:01:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:10:27.023 04:01:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:10:27.023 04:01:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:10:27.023 04:01:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:10:27.023 04:01:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:10:27.023 04:01:20 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']' 00:10:27.023 04:01:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:10:27.023 04:01:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:10:27.023 04:01:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:10:27.023 04:01:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.funqZ40fgC 00:10:27.023 04:01:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=65410 00:10:27.023 04:01:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:10:27.023 04:01:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 65410 00:10:27.023 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:27.023 04:01:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # '[' -z 65410 ']' 00:10:27.023 04:01:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:27.023 04:01:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:27.023 04:01:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:27.023 04:01:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:27.023 04:01:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:27.023 [2024-12-06 04:01:20.174874] Starting SPDK v25.01-pre git sha1 a4d2a837b / DPDK 24.03.0 initialization... 
00:10:27.023 [2024-12-06 04:01:20.175107] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65410 ] 00:10:27.023 [2024-12-06 04:01:20.351868] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:27.281 [2024-12-06 04:01:20.464342] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:27.540 [2024-12-06 04:01:20.676459] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:27.540 [2024-12-06 04:01:20.676520] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:27.799 04:01:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:27.799 04:01:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@868 -- # return 0 00:10:27.799 04:01:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:27.799 04:01:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:10:27.799 04:01:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:27.800 04:01:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:27.800 BaseBdev1_malloc 00:10:27.800 04:01:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:27.800 04:01:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:10:27.800 04:01:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:27.800 04:01:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:27.800 true 00:10:27.800 04:01:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:10:27.800 04:01:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:10:27.800 04:01:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:27.800 04:01:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:27.800 [2024-12-06 04:01:21.081556] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:10:27.800 [2024-12-06 04:01:21.081663] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:27.800 [2024-12-06 04:01:21.081701] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:10:27.800 [2024-12-06 04:01:21.081743] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:27.800 [2024-12-06 04:01:21.083991] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:27.800 [2024-12-06 04:01:21.084081] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:10:27.800 BaseBdev1 00:10:27.800 04:01:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:27.800 04:01:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:27.800 04:01:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:10:27.800 04:01:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:27.800 04:01:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:27.800 BaseBdev2_malloc 00:10:27.800 04:01:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:27.800 04:01:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:10:27.800 04:01:21 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:10:27.800 04:01:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:27.800 true 00:10:27.800 04:01:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:27.800 04:01:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:10:27.800 04:01:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:27.800 04:01:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:27.800 [2024-12-06 04:01:21.151324] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:10:27.800 [2024-12-06 04:01:21.151435] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:27.800 [2024-12-06 04:01:21.151474] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:10:27.800 [2024-12-06 04:01:21.151512] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:28.060 [2024-12-06 04:01:21.153731] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:28.060 [2024-12-06 04:01:21.153811] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:10:28.060 BaseBdev2 00:10:28.060 04:01:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:28.060 04:01:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:28.060 04:01:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:10:28.060 04:01:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:28.060 04:01:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:28.060 BaseBdev3_malloc 00:10:28.060 04:01:21 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:28.060 04:01:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:10:28.060 04:01:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:28.060 04:01:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:28.060 true 00:10:28.060 04:01:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:28.060 04:01:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:10:28.060 04:01:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:28.060 04:01:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:28.060 [2024-12-06 04:01:21.235043] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:10:28.060 [2024-12-06 04:01:21.235182] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:28.060 [2024-12-06 04:01:21.235250] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:10:28.060 [2024-12-06 04:01:21.235343] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:28.060 [2024-12-06 04:01:21.238152] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:28.060 [2024-12-06 04:01:21.238252] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:10:28.060 BaseBdev3 00:10:28.060 04:01:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:28.060 04:01:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s 00:10:28.060 04:01:21 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:10:28.060 04:01:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:28.060 [2024-12-06 04:01:21.247228] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:28.061 [2024-12-06 04:01:21.249664] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:28.061 [2024-12-06 04:01:21.249834] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:28.061 [2024-12-06 04:01:21.250160] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:10:28.061 [2024-12-06 04:01:21.250241] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:10:28.061 [2024-12-06 04:01:21.250632] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006560 00:10:28.061 [2024-12-06 04:01:21.250923] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:10:28.061 [2024-12-06 04:01:21.251002] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:10:28.061 [2024-12-06 04:01:21.251365] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:28.061 04:01:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:28.061 04:01:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:10:28.061 04:01:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:28.061 04:01:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:28.061 04:01:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:28.061 04:01:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:28.061 04:01:21 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:28.061 04:01:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:28.061 04:01:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:28.061 04:01:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:28.061 04:01:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:28.061 04:01:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:28.061 04:01:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:28.061 04:01:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:28.061 04:01:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:28.061 04:01:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:28.061 04:01:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:28.061 "name": "raid_bdev1", 00:10:28.061 "uuid": "9e28ccd0-6f9c-4864-9233-06b3ac5fe414", 00:10:28.061 "strip_size_kb": 64, 00:10:28.061 "state": "online", 00:10:28.061 "raid_level": "raid0", 00:10:28.061 "superblock": true, 00:10:28.061 "num_base_bdevs": 3, 00:10:28.061 "num_base_bdevs_discovered": 3, 00:10:28.061 "num_base_bdevs_operational": 3, 00:10:28.061 "base_bdevs_list": [ 00:10:28.061 { 00:10:28.061 "name": "BaseBdev1", 00:10:28.061 "uuid": "360a5ca3-2f6f-578a-bd39-4bbc997f18c1", 00:10:28.061 "is_configured": true, 00:10:28.061 "data_offset": 2048, 00:10:28.061 "data_size": 63488 00:10:28.061 }, 00:10:28.061 { 00:10:28.061 "name": "BaseBdev2", 00:10:28.061 "uuid": "cb364070-2334-52df-b887-2b8260ee6cc6", 00:10:28.061 "is_configured": true, 00:10:28.061 "data_offset": 2048, 00:10:28.061 "data_size": 63488 
00:10:28.061 }, 00:10:28.061 { 00:10:28.061 "name": "BaseBdev3", 00:10:28.061 "uuid": "d475254f-a6a9-551c-b14d-f362786d373c", 00:10:28.061 "is_configured": true, 00:10:28.061 "data_offset": 2048, 00:10:28.061 "data_size": 63488 00:10:28.061 } 00:10:28.061 ] 00:10:28.061 }' 00:10:28.061 04:01:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:28.061 04:01:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:28.629 04:01:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:10:28.629 04:01:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:10:28.629 [2024-12-06 04:01:21.795636] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006700 00:10:29.566 04:01:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:10:29.566 04:01:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:29.566 04:01:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:29.566 04:01:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:29.566 04:01:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:10:29.566 04:01:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]] 00:10:29.566 04:01:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=3 00:10:29.566 04:01:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:10:29.566 04:01:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:29.566 04:01:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 
00:10:29.566 04:01:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:29.566 04:01:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:29.566 04:01:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:29.566 04:01:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:29.566 04:01:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:29.566 04:01:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:29.566 04:01:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:29.566 04:01:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:29.566 04:01:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:29.566 04:01:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:29.566 04:01:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:29.566 04:01:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:29.566 04:01:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:29.566 "name": "raid_bdev1", 00:10:29.566 "uuid": "9e28ccd0-6f9c-4864-9233-06b3ac5fe414", 00:10:29.566 "strip_size_kb": 64, 00:10:29.566 "state": "online", 00:10:29.566 "raid_level": "raid0", 00:10:29.566 "superblock": true, 00:10:29.566 "num_base_bdevs": 3, 00:10:29.566 "num_base_bdevs_discovered": 3, 00:10:29.566 "num_base_bdevs_operational": 3, 00:10:29.566 "base_bdevs_list": [ 00:10:29.566 { 00:10:29.566 "name": "BaseBdev1", 00:10:29.566 "uuid": "360a5ca3-2f6f-578a-bd39-4bbc997f18c1", 00:10:29.566 "is_configured": true, 00:10:29.566 "data_offset": 2048, 00:10:29.566 "data_size": 63488 
00:10:29.566 }, 00:10:29.566 { 00:10:29.566 "name": "BaseBdev2", 00:10:29.566 "uuid": "cb364070-2334-52df-b887-2b8260ee6cc6", 00:10:29.566 "is_configured": true, 00:10:29.566 "data_offset": 2048, 00:10:29.566 "data_size": 63488 00:10:29.566 }, 00:10:29.566 { 00:10:29.566 "name": "BaseBdev3", 00:10:29.566 "uuid": "d475254f-a6a9-551c-b14d-f362786d373c", 00:10:29.566 "is_configured": true, 00:10:29.566 "data_offset": 2048, 00:10:29.566 "data_size": 63488 00:10:29.566 } 00:10:29.566 ] 00:10:29.566 }' 00:10:29.566 04:01:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:29.566 04:01:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:29.826 04:01:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:10:29.826 04:01:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:29.826 04:01:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:29.826 [2024-12-06 04:01:23.155691] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:29.826 [2024-12-06 04:01:23.155777] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:29.827 [2024-12-06 04:01:23.158721] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:29.827 [2024-12-06 04:01:23.158810] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:29.827 [2024-12-06 04:01:23.158867] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:29.827 [2024-12-06 04:01:23.158908] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:10:29.827 { 00:10:29.827 "results": [ 00:10:29.827 { 00:10:29.827 "job": "raid_bdev1", 00:10:29.827 "core_mask": "0x1", 00:10:29.827 "workload": "randrw", 00:10:29.827 "percentage": 50, 
00:10:29.827 "status": "finished", 00:10:29.827 "queue_depth": 1, 00:10:29.827 "io_size": 131072, 00:10:29.827 "runtime": 1.360908, 00:10:29.827 "iops": 14819.517557395504, 00:10:29.827 "mibps": 1852.439694674438, 00:10:29.827 "io_failed": 1, 00:10:29.827 "io_timeout": 0, 00:10:29.827 "avg_latency_us": 93.40942849515481, 00:10:29.827 "min_latency_us": 20.79301310043668, 00:10:29.827 "max_latency_us": 1452.380786026201 00:10:29.827 } 00:10:29.827 ], 00:10:29.827 "core_count": 1 00:10:29.827 } 00:10:29.827 04:01:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:29.827 04:01:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 65410 00:10:29.827 04:01:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # '[' -z 65410 ']' 00:10:29.827 04:01:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # kill -0 65410 00:10:29.827 04:01:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # uname 00:10:29.827 04:01:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:29.827 04:01:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 65410 00:10:30.088 04:01:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:30.088 04:01:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:30.088 04:01:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 65410' 00:10:30.088 killing process with pid 65410 00:10:30.088 04:01:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@973 -- # kill 65410 00:10:30.088 [2024-12-06 04:01:23.197970] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:30.088 04:01:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@978 -- # wait 65410 00:10:30.088 [2024-12-06 
04:01:23.436274] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:31.494 04:01:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.funqZ40fgC 00:10:31.494 04:01:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:10:31.494 04:01:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:10:31.494 04:01:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.73 00:10:31.494 04:01:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid0 00:10:31.494 04:01:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:10:31.494 04:01:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:10:31.494 ************************************ 00:10:31.494 END TEST raid_read_error_test 00:10:31.494 ************************************ 00:10:31.494 04:01:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.73 != \0\.\0\0 ]] 00:10:31.494 00:10:31.494 real 0m4.615s 00:10:31.494 user 0m5.444s 00:10:31.494 sys 0m0.567s 00:10:31.494 04:01:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:31.494 04:01:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:31.494 04:01:24 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test raid0 3 write 00:10:31.494 04:01:24 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:10:31.494 04:01:24 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:31.494 04:01:24 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:31.494 ************************************ 00:10:31.494 START TEST raid_write_error_test 00:10:31.494 ************************************ 00:10:31.494 04:01:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid0 3 write 00:10:31.494 04:01:24 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0 00:10:31.494 04:01:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=3 00:10:31.494 04:01:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:10:31.494 04:01:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:10:31.494 04:01:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:31.494 04:01:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:10:31.494 04:01:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:31.494 04:01:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:31.494 04:01:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:10:31.494 04:01:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:31.494 04:01:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:31.494 04:01:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:10:31.494 04:01:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:31.494 04:01:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:31.494 04:01:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:10:31.494 04:01:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:10:31.494 04:01:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:10:31.494 04:01:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:10:31.494 04:01:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:10:31.494 04:01:24 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:10:31.494 04:01:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:10:31.494 04:01:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']' 00:10:31.494 04:01:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:10:31.494 04:01:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:10:31.494 04:01:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:10:31.494 04:01:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.Z9ihy5uv2k 00:10:31.494 04:01:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=65556 00:10:31.494 04:01:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:10:31.494 04:01:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 65556 00:10:31.494 04:01:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # '[' -z 65556 ']' 00:10:31.494 04:01:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:31.494 04:01:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:31.494 04:01:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:31.494 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:10:31.494 04:01:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:31.494 04:01:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:31.751 [2024-12-06 04:01:24.855425] Starting SPDK v25.01-pre git sha1 a4d2a837b / DPDK 24.03.0 initialization... 00:10:31.751 [2024-12-06 04:01:24.855638] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65556 ] 00:10:31.751 [2024-12-06 04:01:25.027264] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:32.009 [2024-12-06 04:01:25.144121] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:32.009 [2024-12-06 04:01:25.353889] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:32.009 [2024-12-06 04:01:25.354035] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:32.573 04:01:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:32.573 04:01:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@868 -- # return 0 00:10:32.573 04:01:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:32.573 04:01:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:10:32.573 04:01:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:32.573 04:01:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:32.573 BaseBdev1_malloc 00:10:32.573 04:01:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:32.573 04:01:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create 
BaseBdev1_malloc 00:10:32.573 04:01:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:32.573 04:01:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:32.573 true 00:10:32.573 04:01:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:32.573 04:01:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:10:32.573 04:01:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:32.573 04:01:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:32.573 [2024-12-06 04:01:25.786584] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:10:32.573 [2024-12-06 04:01:25.786643] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:32.573 [2024-12-06 04:01:25.786662] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:10:32.573 [2024-12-06 04:01:25.786673] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:32.573 [2024-12-06 04:01:25.789089] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:32.573 [2024-12-06 04:01:25.789219] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:10:32.573 BaseBdev1 00:10:32.574 04:01:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:32.574 04:01:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:32.574 04:01:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:10:32.574 04:01:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:32.574 04:01:25 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@10 -- # set +x 00:10:32.574 BaseBdev2_malloc 00:10:32.574 04:01:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:32.574 04:01:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:10:32.574 04:01:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:32.574 04:01:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:32.574 true 00:10:32.574 04:01:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:32.574 04:01:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:10:32.574 04:01:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:32.574 04:01:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:32.574 [2024-12-06 04:01:25.856310] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:10:32.574 [2024-12-06 04:01:25.856441] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:32.574 [2024-12-06 04:01:25.856487] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:10:32.574 [2024-12-06 04:01:25.856501] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:32.574 [2024-12-06 04:01:25.859067] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:32.574 [2024-12-06 04:01:25.859106] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:10:32.574 BaseBdev2 00:10:32.574 04:01:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:32.574 04:01:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:32.574 04:01:25 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:10:32.574 04:01:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:32.574 04:01:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:32.574 BaseBdev3_malloc 00:10:32.574 04:01:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:32.574 04:01:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:10:32.574 04:01:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:32.574 04:01:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:32.833 true 00:10:32.833 04:01:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:32.833 04:01:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:10:32.833 04:01:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:32.833 04:01:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:32.833 [2024-12-06 04:01:25.939349] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:10:32.833 [2024-12-06 04:01:25.939417] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:32.833 [2024-12-06 04:01:25.939449] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:10:32.833 [2024-12-06 04:01:25.939460] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:32.833 [2024-12-06 04:01:25.941656] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:32.833 [2024-12-06 04:01:25.941742] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: 
BaseBdev3 00:10:32.833 BaseBdev3 00:10:32.833 04:01:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:32.833 04:01:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s 00:10:32.833 04:01:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:32.833 04:01:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:32.833 [2024-12-06 04:01:25.951417] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:32.833 [2024-12-06 04:01:25.953335] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:32.833 [2024-12-06 04:01:25.953450] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:32.833 [2024-12-06 04:01:25.953678] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:10:32.833 [2024-12-06 04:01:25.953731] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:10:32.833 [2024-12-06 04:01:25.954005] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006560 00:10:32.833 [2024-12-06 04:01:25.954257] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:10:32.833 [2024-12-06 04:01:25.954309] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:10:32.833 [2024-12-06 04:01:25.954503] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:32.833 04:01:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:32.833 04:01:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:10:32.833 04:01:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 
-- # local raid_bdev_name=raid_bdev1 00:10:32.833 04:01:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:32.833 04:01:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:32.833 04:01:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:32.833 04:01:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:32.833 04:01:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:32.833 04:01:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:32.833 04:01:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:32.833 04:01:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:32.833 04:01:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:32.833 04:01:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:32.833 04:01:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:32.833 04:01:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:32.833 04:01:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:32.833 04:01:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:32.833 "name": "raid_bdev1", 00:10:32.833 "uuid": "620722fe-82a0-4a99-a02c-aca970707bae", 00:10:32.833 "strip_size_kb": 64, 00:10:32.833 "state": "online", 00:10:32.833 "raid_level": "raid0", 00:10:32.833 "superblock": true, 00:10:32.833 "num_base_bdevs": 3, 00:10:32.833 "num_base_bdevs_discovered": 3, 00:10:32.833 "num_base_bdevs_operational": 3, 00:10:32.833 "base_bdevs_list": [ 00:10:32.833 { 00:10:32.833 "name": "BaseBdev1", 
00:10:32.833 "uuid": "a2a61131-363d-5066-b470-2dda36d094e2", 00:10:32.833 "is_configured": true, 00:10:32.833 "data_offset": 2048, 00:10:32.833 "data_size": 63488 00:10:32.833 }, 00:10:32.833 { 00:10:32.833 "name": "BaseBdev2", 00:10:32.833 "uuid": "cacf6b34-868e-5f43-8308-920fe59a816d", 00:10:32.833 "is_configured": true, 00:10:32.833 "data_offset": 2048, 00:10:32.833 "data_size": 63488 00:10:32.833 }, 00:10:32.833 { 00:10:32.833 "name": "BaseBdev3", 00:10:32.833 "uuid": "beb82e1b-893d-56b5-8c0c-01de0aa4754b", 00:10:32.833 "is_configured": true, 00:10:32.833 "data_offset": 2048, 00:10:32.833 "data_size": 63488 00:10:32.833 } 00:10:32.833 ] 00:10:32.833 }' 00:10:32.833 04:01:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:32.833 04:01:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:33.092 04:01:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:10:33.092 04:01:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:10:33.351 [2024-12-06 04:01:26.455787] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006700 00:10:34.294 04:01:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:10:34.294 04:01:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:34.294 04:01:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:34.294 04:01:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:34.294 04:01:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:10:34.294 04:01:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]] 00:10:34.294 04:01:27 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=3 00:10:34.294 04:01:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:10:34.294 04:01:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:34.294 04:01:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:34.294 04:01:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:34.294 04:01:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:34.294 04:01:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:34.294 04:01:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:34.294 04:01:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:34.294 04:01:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:34.294 04:01:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:34.294 04:01:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:34.294 04:01:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:34.294 04:01:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:34.294 04:01:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:34.294 04:01:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:34.294 04:01:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:34.294 "name": "raid_bdev1", 00:10:34.294 "uuid": "620722fe-82a0-4a99-a02c-aca970707bae", 00:10:34.294 "strip_size_kb": 64, 00:10:34.294 "state": "online", 00:10:34.294 
"raid_level": "raid0", 00:10:34.294 "superblock": true, 00:10:34.294 "num_base_bdevs": 3, 00:10:34.294 "num_base_bdevs_discovered": 3, 00:10:34.294 "num_base_bdevs_operational": 3, 00:10:34.294 "base_bdevs_list": [ 00:10:34.294 { 00:10:34.294 "name": "BaseBdev1", 00:10:34.294 "uuid": "a2a61131-363d-5066-b470-2dda36d094e2", 00:10:34.294 "is_configured": true, 00:10:34.294 "data_offset": 2048, 00:10:34.294 "data_size": 63488 00:10:34.294 }, 00:10:34.294 { 00:10:34.294 "name": "BaseBdev2", 00:10:34.294 "uuid": "cacf6b34-868e-5f43-8308-920fe59a816d", 00:10:34.294 "is_configured": true, 00:10:34.294 "data_offset": 2048, 00:10:34.294 "data_size": 63488 00:10:34.294 }, 00:10:34.295 { 00:10:34.295 "name": "BaseBdev3", 00:10:34.295 "uuid": "beb82e1b-893d-56b5-8c0c-01de0aa4754b", 00:10:34.295 "is_configured": true, 00:10:34.295 "data_offset": 2048, 00:10:34.295 "data_size": 63488 00:10:34.295 } 00:10:34.295 ] 00:10:34.295 }' 00:10:34.295 04:01:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:34.295 04:01:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:34.554 04:01:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:10:34.554 04:01:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:34.554 04:01:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:34.554 [2024-12-06 04:01:27.812180] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:34.554 [2024-12-06 04:01:27.812292] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:34.554 [2024-12-06 04:01:27.815345] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:34.554 [2024-12-06 04:01:27.815430] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:34.554 [2024-12-06 04:01:27.815490] bdev_raid.c: 
469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:34.554 [2024-12-06 04:01:27.815529] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:10:34.554 { 00:10:34.554 "results": [ 00:10:34.554 { 00:10:34.554 "job": "raid_bdev1", 00:10:34.554 "core_mask": "0x1", 00:10:34.554 "workload": "randrw", 00:10:34.554 "percentage": 50, 00:10:34.554 "status": "finished", 00:10:34.554 "queue_depth": 1, 00:10:34.554 "io_size": 131072, 00:10:34.554 "runtime": 1.3573, 00:10:34.554 "iops": 14389.59699403227, 00:10:34.554 "mibps": 1798.6996242540338, 00:10:34.554 "io_failed": 1, 00:10:34.554 "io_timeout": 0, 00:10:34.554 "avg_latency_us": 96.09149218346872, 00:10:34.554 "min_latency_us": 28.50655021834061, 00:10:34.554 "max_latency_us": 1638.4 00:10:34.554 } 00:10:34.554 ], 00:10:34.554 "core_count": 1 00:10:34.554 } 00:10:34.554 04:01:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:34.554 04:01:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 65556 00:10:34.554 04:01:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # '[' -z 65556 ']' 00:10:34.554 04:01:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # kill -0 65556 00:10:34.554 04:01:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # uname 00:10:34.554 04:01:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:34.554 04:01:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 65556 00:10:34.554 killing process with pid 65556 00:10:34.554 04:01:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:34.554 04:01:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:34.554 04:01:27 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 65556' 00:10:34.554 04:01:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@973 -- # kill 65556 00:10:34.554 [2024-12-06 04:01:27.861331] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:34.554 04:01:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@978 -- # wait 65556 00:10:34.812 [2024-12-06 04:01:28.101238] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:36.193 04:01:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.Z9ihy5uv2k 00:10:36.193 04:01:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:10:36.193 04:01:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:10:36.193 04:01:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.74 00:10:36.193 04:01:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid0 00:10:36.193 04:01:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:10:36.193 04:01:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:10:36.193 04:01:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.74 != \0\.\0\0 ]] 00:10:36.193 00:10:36.193 real 0m4.589s 00:10:36.193 user 0m5.436s 00:10:36.193 sys 0m0.545s 00:10:36.194 04:01:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:36.194 04:01:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:36.194 ************************************ 00:10:36.194 END TEST raid_write_error_test 00:10:36.194 ************************************ 00:10:36.194 04:01:29 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:10:36.194 04:01:29 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test 
raid_state_function_test concat 3 false 00:10:36.194 04:01:29 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:10:36.194 04:01:29 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:36.194 04:01:29 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:36.194 ************************************ 00:10:36.194 START TEST raid_state_function_test 00:10:36.194 ************************************ 00:10:36.194 04:01:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test concat 3 false 00:10:36.194 04:01:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=concat 00:10:36.194 04:01:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:10:36.194 04:01:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:10:36.194 04:01:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:10:36.194 04:01:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:10:36.194 04:01:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:36.194 04:01:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:10:36.194 04:01:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:36.194 04:01:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:36.194 04:01:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:10:36.194 04:01:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:36.194 04:01:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:36.194 04:01:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:10:36.194 04:01:29 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:36.194 04:01:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:36.194 04:01:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:10:36.194 04:01:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:10:36.194 04:01:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:10:36.194 04:01:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:10:36.194 04:01:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:10:36.194 04:01:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:10:36.194 04:01:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']' 00:10:36.194 04:01:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:10:36.194 04:01:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:10:36.194 04:01:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:10:36.194 04:01:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:10:36.194 04:01:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=65694 00:10:36.194 04:01:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:10:36.194 04:01:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 65694' 00:10:36.194 Process raid pid: 65694 00:10:36.194 04:01:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 65694 00:10:36.194 04:01:29 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 65694 ']' 00:10:36.194 04:01:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:36.194 04:01:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:36.194 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:36.194 04:01:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:36.194 04:01:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:36.194 04:01:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:36.194 [2024-12-06 04:01:29.507397] Starting SPDK v25.01-pre git sha1 a4d2a837b / DPDK 24.03.0 initialization... 00:10:36.194 [2024-12-06 04:01:29.507505] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:36.453 [2024-12-06 04:01:29.668555] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:36.453 [2024-12-06 04:01:29.788937] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:36.711 [2024-12-06 04:01:30.008400] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:36.711 [2024-12-06 04:01:30.008441] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:37.280 04:01:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:37.280 04:01:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:10:37.280 04:01:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd 
bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:10:37.280 04:01:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:37.280 04:01:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:37.280 [2024-12-06 04:01:30.361442] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:37.280 [2024-12-06 04:01:30.361559] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:37.280 [2024-12-06 04:01:30.361608] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:37.280 [2024-12-06 04:01:30.361636] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:37.280 [2024-12-06 04:01:30.361658] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:10:37.280 [2024-12-06 04:01:30.361682] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:37.280 04:01:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:37.280 04:01:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:10:37.280 04:01:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:37.280 04:01:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:37.280 04:01:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:37.280 04:01:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:37.280 04:01:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:37.280 04:01:30 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:37.280 04:01:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:37.280 04:01:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:37.280 04:01:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:37.280 04:01:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:37.280 04:01:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:37.281 04:01:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:37.281 04:01:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:37.281 04:01:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:37.281 04:01:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:37.281 "name": "Existed_Raid", 00:10:37.281 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:37.281 "strip_size_kb": 64, 00:10:37.281 "state": "configuring", 00:10:37.281 "raid_level": "concat", 00:10:37.281 "superblock": false, 00:10:37.281 "num_base_bdevs": 3, 00:10:37.281 "num_base_bdevs_discovered": 0, 00:10:37.281 "num_base_bdevs_operational": 3, 00:10:37.281 "base_bdevs_list": [ 00:10:37.281 { 00:10:37.281 "name": "BaseBdev1", 00:10:37.281 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:37.281 "is_configured": false, 00:10:37.281 "data_offset": 0, 00:10:37.281 "data_size": 0 00:10:37.281 }, 00:10:37.281 { 00:10:37.281 "name": "BaseBdev2", 00:10:37.281 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:37.281 "is_configured": false, 00:10:37.281 "data_offset": 0, 00:10:37.281 "data_size": 0 00:10:37.281 }, 00:10:37.281 { 00:10:37.281 "name": "BaseBdev3", 00:10:37.281 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:10:37.281 "is_configured": false, 00:10:37.281 "data_offset": 0, 00:10:37.281 "data_size": 0 00:10:37.281 } 00:10:37.281 ] 00:10:37.281 }' 00:10:37.281 04:01:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:37.281 04:01:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:37.540 04:01:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:37.540 04:01:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:37.540 04:01:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:37.540 [2024-12-06 04:01:30.752722] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:37.540 [2024-12-06 04:01:30.752804] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:10:37.540 04:01:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:37.540 04:01:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:10:37.540 04:01:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:37.540 04:01:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:37.540 [2024-12-06 04:01:30.760708] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:37.540 [2024-12-06 04:01:30.760811] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:37.540 [2024-12-06 04:01:30.760846] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:37.540 [2024-12-06 04:01:30.760874] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev 
BaseBdev2 doesn't exist now 00:10:37.540 [2024-12-06 04:01:30.760897] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:10:37.540 [2024-12-06 04:01:30.760923] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:37.540 04:01:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:37.540 04:01:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:10:37.540 04:01:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:37.540 04:01:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:37.540 BaseBdev1 00:10:37.540 [2024-12-06 04:01:30.808576] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:37.540 04:01:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:37.540 04:01:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:10:37.540 04:01:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:10:37.540 04:01:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:37.540 04:01:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:10:37.540 04:01:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:37.540 04:01:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:37.540 04:01:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:37.540 04:01:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:37.540 04:01:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set 
+x 00:10:37.540 04:01:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:37.540 04:01:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:10:37.540 04:01:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:37.540 04:01:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:37.540 [ 00:10:37.540 { 00:10:37.540 "name": "BaseBdev1", 00:10:37.540 "aliases": [ 00:10:37.540 "358ac6f8-3fa4-413e-882e-ed0967bb16bc" 00:10:37.540 ], 00:10:37.540 "product_name": "Malloc disk", 00:10:37.540 "block_size": 512, 00:10:37.540 "num_blocks": 65536, 00:10:37.540 "uuid": "358ac6f8-3fa4-413e-882e-ed0967bb16bc", 00:10:37.540 "assigned_rate_limits": { 00:10:37.540 "rw_ios_per_sec": 0, 00:10:37.540 "rw_mbytes_per_sec": 0, 00:10:37.540 "r_mbytes_per_sec": 0, 00:10:37.540 "w_mbytes_per_sec": 0 00:10:37.540 }, 00:10:37.540 "claimed": true, 00:10:37.541 "claim_type": "exclusive_write", 00:10:37.541 "zoned": false, 00:10:37.541 "supported_io_types": { 00:10:37.541 "read": true, 00:10:37.541 "write": true, 00:10:37.541 "unmap": true, 00:10:37.541 "flush": true, 00:10:37.541 "reset": true, 00:10:37.541 "nvme_admin": false, 00:10:37.541 "nvme_io": false, 00:10:37.541 "nvme_io_md": false, 00:10:37.541 "write_zeroes": true, 00:10:37.541 "zcopy": true, 00:10:37.541 "get_zone_info": false, 00:10:37.541 "zone_management": false, 00:10:37.541 "zone_append": false, 00:10:37.541 "compare": false, 00:10:37.541 "compare_and_write": false, 00:10:37.541 "abort": true, 00:10:37.541 "seek_hole": false, 00:10:37.541 "seek_data": false, 00:10:37.541 "copy": true, 00:10:37.541 "nvme_iov_md": false 00:10:37.541 }, 00:10:37.541 "memory_domains": [ 00:10:37.541 { 00:10:37.541 "dma_device_id": "system", 00:10:37.541 "dma_device_type": 1 00:10:37.541 }, 00:10:37.541 { 00:10:37.541 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 
00:10:37.541 "dma_device_type": 2 00:10:37.541 } 00:10:37.541 ], 00:10:37.541 "driver_specific": {} 00:10:37.541 } 00:10:37.541 ] 00:10:37.541 04:01:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:37.541 04:01:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:10:37.541 04:01:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:10:37.541 04:01:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:37.541 04:01:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:37.541 04:01:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:37.541 04:01:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:37.541 04:01:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:37.541 04:01:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:37.541 04:01:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:37.541 04:01:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:37.541 04:01:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:37.541 04:01:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:37.541 04:01:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:37.541 04:01:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:37.541 04:01:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:37.541 04:01:30 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:37.541 04:01:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:37.541 "name": "Existed_Raid", 00:10:37.541 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:37.541 "strip_size_kb": 64, 00:10:37.541 "state": "configuring", 00:10:37.541 "raid_level": "concat", 00:10:37.541 "superblock": false, 00:10:37.541 "num_base_bdevs": 3, 00:10:37.541 "num_base_bdevs_discovered": 1, 00:10:37.541 "num_base_bdevs_operational": 3, 00:10:37.541 "base_bdevs_list": [ 00:10:37.541 { 00:10:37.541 "name": "BaseBdev1", 00:10:37.541 "uuid": "358ac6f8-3fa4-413e-882e-ed0967bb16bc", 00:10:37.541 "is_configured": true, 00:10:37.541 "data_offset": 0, 00:10:37.541 "data_size": 65536 00:10:37.541 }, 00:10:37.541 { 00:10:37.541 "name": "BaseBdev2", 00:10:37.541 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:37.541 "is_configured": false, 00:10:37.541 "data_offset": 0, 00:10:37.541 "data_size": 0 00:10:37.541 }, 00:10:37.541 { 00:10:37.541 "name": "BaseBdev3", 00:10:37.541 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:37.541 "is_configured": false, 00:10:37.541 "data_offset": 0, 00:10:37.541 "data_size": 0 00:10:37.541 } 00:10:37.541 ] 00:10:37.541 }' 00:10:37.541 04:01:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:37.541 04:01:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:38.110 04:01:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:38.110 04:01:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:38.110 04:01:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:38.110 [2024-12-06 04:01:31.271876] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:38.110 [2024-12-06 04:01:31.271984] bdev_raid.c: 
380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:10:38.110 04:01:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:38.110 04:01:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:10:38.110 04:01:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:38.110 04:01:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:38.110 [2024-12-06 04:01:31.279914] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:38.110 [2024-12-06 04:01:31.282027] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:38.110 [2024-12-06 04:01:31.282129] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:38.110 [2024-12-06 04:01:31.282181] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:10:38.110 [2024-12-06 04:01:31.282211] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:38.110 04:01:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:38.110 04:01:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:10:38.110 04:01:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:38.110 04:01:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:10:38.110 04:01:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:38.110 04:01:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:38.110 04:01:31 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:38.110 04:01:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:38.110 04:01:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:38.110 04:01:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:38.110 04:01:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:38.110 04:01:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:38.110 04:01:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:38.110 04:01:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:38.110 04:01:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:38.110 04:01:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:38.110 04:01:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:38.110 04:01:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:38.110 04:01:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:38.110 "name": "Existed_Raid", 00:10:38.110 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:38.110 "strip_size_kb": 64, 00:10:38.110 "state": "configuring", 00:10:38.110 "raid_level": "concat", 00:10:38.110 "superblock": false, 00:10:38.110 "num_base_bdevs": 3, 00:10:38.110 "num_base_bdevs_discovered": 1, 00:10:38.110 "num_base_bdevs_operational": 3, 00:10:38.110 "base_bdevs_list": [ 00:10:38.110 { 00:10:38.110 "name": "BaseBdev1", 00:10:38.110 "uuid": "358ac6f8-3fa4-413e-882e-ed0967bb16bc", 00:10:38.110 "is_configured": true, 00:10:38.110 "data_offset": 
0, 00:10:38.110 "data_size": 65536 00:10:38.110 }, 00:10:38.110 { 00:10:38.110 "name": "BaseBdev2", 00:10:38.110 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:38.110 "is_configured": false, 00:10:38.110 "data_offset": 0, 00:10:38.110 "data_size": 0 00:10:38.110 }, 00:10:38.110 { 00:10:38.110 "name": "BaseBdev3", 00:10:38.110 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:38.110 "is_configured": false, 00:10:38.110 "data_offset": 0, 00:10:38.110 "data_size": 0 00:10:38.110 } 00:10:38.110 ] 00:10:38.110 }' 00:10:38.110 04:01:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:38.110 04:01:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:38.679 04:01:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:10:38.679 04:01:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:38.679 04:01:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:38.679 [2024-12-06 04:01:31.761497] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:38.679 BaseBdev2 00:10:38.679 04:01:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:38.679 04:01:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:10:38.680 04:01:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:10:38.680 04:01:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:38.680 04:01:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:10:38.680 04:01:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:38.680 04:01:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 
00:10:38.680 04:01:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:38.680 04:01:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:38.680 04:01:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:38.680 04:01:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:38.680 04:01:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:10:38.680 04:01:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:38.680 04:01:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:38.680 [ 00:10:38.680 { 00:10:38.680 "name": "BaseBdev2", 00:10:38.680 "aliases": [ 00:10:38.680 "9378fd35-08fb-4438-8aa4-5d93233271be" 00:10:38.680 ], 00:10:38.680 "product_name": "Malloc disk", 00:10:38.680 "block_size": 512, 00:10:38.680 "num_blocks": 65536, 00:10:38.680 "uuid": "9378fd35-08fb-4438-8aa4-5d93233271be", 00:10:38.680 "assigned_rate_limits": { 00:10:38.680 "rw_ios_per_sec": 0, 00:10:38.680 "rw_mbytes_per_sec": 0, 00:10:38.680 "r_mbytes_per_sec": 0, 00:10:38.680 "w_mbytes_per_sec": 0 00:10:38.680 }, 00:10:38.680 "claimed": true, 00:10:38.680 "claim_type": "exclusive_write", 00:10:38.680 "zoned": false, 00:10:38.680 "supported_io_types": { 00:10:38.680 "read": true, 00:10:38.680 "write": true, 00:10:38.680 "unmap": true, 00:10:38.680 "flush": true, 00:10:38.680 "reset": true, 00:10:38.680 "nvme_admin": false, 00:10:38.680 "nvme_io": false, 00:10:38.680 "nvme_io_md": false, 00:10:38.680 "write_zeroes": true, 00:10:38.680 "zcopy": true, 00:10:38.680 "get_zone_info": false, 00:10:38.680 "zone_management": false, 00:10:38.680 "zone_append": false, 00:10:38.680 "compare": false, 00:10:38.680 "compare_and_write": false, 00:10:38.680 "abort": true, 00:10:38.680 "seek_hole": 
false, 00:10:38.680 "seek_data": false, 00:10:38.680 "copy": true, 00:10:38.680 "nvme_iov_md": false 00:10:38.680 }, 00:10:38.680 "memory_domains": [ 00:10:38.680 { 00:10:38.680 "dma_device_id": "system", 00:10:38.680 "dma_device_type": 1 00:10:38.680 }, 00:10:38.680 { 00:10:38.680 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:38.680 "dma_device_type": 2 00:10:38.680 } 00:10:38.680 ], 00:10:38.680 "driver_specific": {} 00:10:38.680 } 00:10:38.680 ] 00:10:38.680 04:01:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:38.680 04:01:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:10:38.680 04:01:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:10:38.680 04:01:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:38.680 04:01:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:10:38.680 04:01:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:38.680 04:01:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:38.680 04:01:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:38.680 04:01:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:38.680 04:01:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:38.680 04:01:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:38.680 04:01:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:38.680 04:01:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:38.680 04:01:31 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:10:38.680 04:01:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:38.680 04:01:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:38.680 04:01:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:38.680 04:01:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:38.680 04:01:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:38.680 04:01:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:38.680 "name": "Existed_Raid", 00:10:38.680 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:38.680 "strip_size_kb": 64, 00:10:38.680 "state": "configuring", 00:10:38.680 "raid_level": "concat", 00:10:38.680 "superblock": false, 00:10:38.680 "num_base_bdevs": 3, 00:10:38.680 "num_base_bdevs_discovered": 2, 00:10:38.680 "num_base_bdevs_operational": 3, 00:10:38.680 "base_bdevs_list": [ 00:10:38.680 { 00:10:38.680 "name": "BaseBdev1", 00:10:38.680 "uuid": "358ac6f8-3fa4-413e-882e-ed0967bb16bc", 00:10:38.680 "is_configured": true, 00:10:38.680 "data_offset": 0, 00:10:38.680 "data_size": 65536 00:10:38.680 }, 00:10:38.680 { 00:10:38.680 "name": "BaseBdev2", 00:10:38.680 "uuid": "9378fd35-08fb-4438-8aa4-5d93233271be", 00:10:38.680 "is_configured": true, 00:10:38.680 "data_offset": 0, 00:10:38.680 "data_size": 65536 00:10:38.680 }, 00:10:38.680 { 00:10:38.680 "name": "BaseBdev3", 00:10:38.680 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:38.680 "is_configured": false, 00:10:38.680 "data_offset": 0, 00:10:38.680 "data_size": 0 00:10:38.680 } 00:10:38.680 ] 00:10:38.680 }' 00:10:38.680 04:01:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:38.680 04:01:31 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:10:38.940 04:01:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:10:38.940 04:01:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:38.940 04:01:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:39.200 [2024-12-06 04:01:32.319695] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:39.200 [2024-12-06 04:01:32.319816] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:10:39.200 [2024-12-06 04:01:32.319846] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:10:39.200 [2024-12-06 04:01:32.320195] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:10:39.200 [2024-12-06 04:01:32.320462] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:10:39.200 [2024-12-06 04:01:32.320511] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:10:39.200 [2024-12-06 04:01:32.320847] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:39.200 BaseBdev3 00:10:39.200 04:01:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:39.200 04:01:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:10:39.200 04:01:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:10:39.200 04:01:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:39.200 04:01:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:10:39.200 04:01:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:39.200 04:01:32 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:39.200 04:01:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:39.200 04:01:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:39.200 04:01:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:39.200 04:01:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:39.200 04:01:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:10:39.200 04:01:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:39.200 04:01:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:39.200 [ 00:10:39.200 { 00:10:39.200 "name": "BaseBdev3", 00:10:39.200 "aliases": [ 00:10:39.200 "c44f39c3-a567-4012-b0e1-260f60452fec" 00:10:39.200 ], 00:10:39.200 "product_name": "Malloc disk", 00:10:39.200 "block_size": 512, 00:10:39.200 "num_blocks": 65536, 00:10:39.200 "uuid": "c44f39c3-a567-4012-b0e1-260f60452fec", 00:10:39.200 "assigned_rate_limits": { 00:10:39.200 "rw_ios_per_sec": 0, 00:10:39.200 "rw_mbytes_per_sec": 0, 00:10:39.200 "r_mbytes_per_sec": 0, 00:10:39.200 "w_mbytes_per_sec": 0 00:10:39.200 }, 00:10:39.200 "claimed": true, 00:10:39.200 "claim_type": "exclusive_write", 00:10:39.200 "zoned": false, 00:10:39.200 "supported_io_types": { 00:10:39.200 "read": true, 00:10:39.200 "write": true, 00:10:39.200 "unmap": true, 00:10:39.200 "flush": true, 00:10:39.200 "reset": true, 00:10:39.200 "nvme_admin": false, 00:10:39.200 "nvme_io": false, 00:10:39.200 "nvme_io_md": false, 00:10:39.200 "write_zeroes": true, 00:10:39.200 "zcopy": true, 00:10:39.200 "get_zone_info": false, 00:10:39.200 "zone_management": false, 00:10:39.200 "zone_append": false, 00:10:39.200 "compare": false, 
00:10:39.200 "compare_and_write": false, 00:10:39.200 "abort": true, 00:10:39.200 "seek_hole": false, 00:10:39.200 "seek_data": false, 00:10:39.200 "copy": true, 00:10:39.200 "nvme_iov_md": false 00:10:39.200 }, 00:10:39.200 "memory_domains": [ 00:10:39.200 { 00:10:39.200 "dma_device_id": "system", 00:10:39.200 "dma_device_type": 1 00:10:39.200 }, 00:10:39.200 { 00:10:39.200 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:39.200 "dma_device_type": 2 00:10:39.200 } 00:10:39.200 ], 00:10:39.200 "driver_specific": {} 00:10:39.200 } 00:10:39.200 ] 00:10:39.200 04:01:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:39.200 04:01:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:10:39.200 04:01:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:10:39.200 04:01:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:39.200 04:01:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 64 3 00:10:39.200 04:01:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:39.200 04:01:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:39.200 04:01:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:39.200 04:01:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:39.200 04:01:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:39.200 04:01:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:39.200 04:01:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:39.200 04:01:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:10:39.201 04:01:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:39.201 04:01:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:39.201 04:01:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:39.201 04:01:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:39.201 04:01:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:39.201 04:01:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:39.201 04:01:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:39.201 "name": "Existed_Raid", 00:10:39.201 "uuid": "f38c2c1f-1df8-41e3-ad98-96e2bfd0acc6", 00:10:39.201 "strip_size_kb": 64, 00:10:39.201 "state": "online", 00:10:39.201 "raid_level": "concat", 00:10:39.201 "superblock": false, 00:10:39.201 "num_base_bdevs": 3, 00:10:39.201 "num_base_bdevs_discovered": 3, 00:10:39.201 "num_base_bdevs_operational": 3, 00:10:39.201 "base_bdevs_list": [ 00:10:39.201 { 00:10:39.201 "name": "BaseBdev1", 00:10:39.201 "uuid": "358ac6f8-3fa4-413e-882e-ed0967bb16bc", 00:10:39.201 "is_configured": true, 00:10:39.201 "data_offset": 0, 00:10:39.201 "data_size": 65536 00:10:39.201 }, 00:10:39.201 { 00:10:39.201 "name": "BaseBdev2", 00:10:39.201 "uuid": "9378fd35-08fb-4438-8aa4-5d93233271be", 00:10:39.201 "is_configured": true, 00:10:39.201 "data_offset": 0, 00:10:39.201 "data_size": 65536 00:10:39.201 }, 00:10:39.201 { 00:10:39.201 "name": "BaseBdev3", 00:10:39.201 "uuid": "c44f39c3-a567-4012-b0e1-260f60452fec", 00:10:39.201 "is_configured": true, 00:10:39.201 "data_offset": 0, 00:10:39.201 "data_size": 65536 00:10:39.201 } 00:10:39.201 ] 00:10:39.201 }' 00:10:39.201 04:01:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 
00:10:39.201 04:01:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:39.461 04:01:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:10:39.461 04:01:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:10:39.461 04:01:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:39.461 04:01:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:39.461 04:01:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:10:39.461 04:01:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:39.461 04:01:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:10:39.461 04:01:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:39.461 04:01:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:39.461 04:01:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:39.461 [2024-12-06 04:01:32.795278] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:39.461 04:01:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:39.721 04:01:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:39.721 "name": "Existed_Raid", 00:10:39.721 "aliases": [ 00:10:39.721 "f38c2c1f-1df8-41e3-ad98-96e2bfd0acc6" 00:10:39.721 ], 00:10:39.721 "product_name": "Raid Volume", 00:10:39.721 "block_size": 512, 00:10:39.721 "num_blocks": 196608, 00:10:39.721 "uuid": "f38c2c1f-1df8-41e3-ad98-96e2bfd0acc6", 00:10:39.721 "assigned_rate_limits": { 00:10:39.721 "rw_ios_per_sec": 0, 00:10:39.721 "rw_mbytes_per_sec": 0, 00:10:39.721 "r_mbytes_per_sec": 
0, 00:10:39.721 "w_mbytes_per_sec": 0 00:10:39.721 }, 00:10:39.721 "claimed": false, 00:10:39.721 "zoned": false, 00:10:39.721 "supported_io_types": { 00:10:39.721 "read": true, 00:10:39.721 "write": true, 00:10:39.721 "unmap": true, 00:10:39.721 "flush": true, 00:10:39.721 "reset": true, 00:10:39.721 "nvme_admin": false, 00:10:39.721 "nvme_io": false, 00:10:39.721 "nvme_io_md": false, 00:10:39.721 "write_zeroes": true, 00:10:39.721 "zcopy": false, 00:10:39.721 "get_zone_info": false, 00:10:39.721 "zone_management": false, 00:10:39.721 "zone_append": false, 00:10:39.721 "compare": false, 00:10:39.721 "compare_and_write": false, 00:10:39.721 "abort": false, 00:10:39.721 "seek_hole": false, 00:10:39.721 "seek_data": false, 00:10:39.721 "copy": false, 00:10:39.721 "nvme_iov_md": false 00:10:39.721 }, 00:10:39.721 "memory_domains": [ 00:10:39.721 { 00:10:39.721 "dma_device_id": "system", 00:10:39.721 "dma_device_type": 1 00:10:39.721 }, 00:10:39.721 { 00:10:39.721 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:39.721 "dma_device_type": 2 00:10:39.721 }, 00:10:39.721 { 00:10:39.721 "dma_device_id": "system", 00:10:39.721 "dma_device_type": 1 00:10:39.721 }, 00:10:39.721 { 00:10:39.721 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:39.721 "dma_device_type": 2 00:10:39.721 }, 00:10:39.721 { 00:10:39.721 "dma_device_id": "system", 00:10:39.721 "dma_device_type": 1 00:10:39.721 }, 00:10:39.721 { 00:10:39.721 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:39.721 "dma_device_type": 2 00:10:39.721 } 00:10:39.721 ], 00:10:39.721 "driver_specific": { 00:10:39.721 "raid": { 00:10:39.721 "uuid": "f38c2c1f-1df8-41e3-ad98-96e2bfd0acc6", 00:10:39.721 "strip_size_kb": 64, 00:10:39.721 "state": "online", 00:10:39.721 "raid_level": "concat", 00:10:39.721 "superblock": false, 00:10:39.721 "num_base_bdevs": 3, 00:10:39.721 "num_base_bdevs_discovered": 3, 00:10:39.721 "num_base_bdevs_operational": 3, 00:10:39.721 "base_bdevs_list": [ 00:10:39.721 { 00:10:39.721 "name": "BaseBdev1", 
00:10:39.721 "uuid": "358ac6f8-3fa4-413e-882e-ed0967bb16bc", 00:10:39.721 "is_configured": true, 00:10:39.721 "data_offset": 0, 00:10:39.721 "data_size": 65536 00:10:39.721 }, 00:10:39.721 { 00:10:39.721 "name": "BaseBdev2", 00:10:39.721 "uuid": "9378fd35-08fb-4438-8aa4-5d93233271be", 00:10:39.721 "is_configured": true, 00:10:39.721 "data_offset": 0, 00:10:39.721 "data_size": 65536 00:10:39.721 }, 00:10:39.721 { 00:10:39.721 "name": "BaseBdev3", 00:10:39.721 "uuid": "c44f39c3-a567-4012-b0e1-260f60452fec", 00:10:39.721 "is_configured": true, 00:10:39.721 "data_offset": 0, 00:10:39.721 "data_size": 65536 00:10:39.721 } 00:10:39.721 ] 00:10:39.721 } 00:10:39.721 } 00:10:39.721 }' 00:10:39.721 04:01:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:39.721 04:01:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:10:39.721 BaseBdev2 00:10:39.721 BaseBdev3' 00:10:39.721 04:01:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:39.721 04:01:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:39.721 04:01:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:39.721 04:01:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:10:39.721 04:01:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:39.721 04:01:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:39.721 04:01:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:39.721 04:01:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:10:39.721 04:01:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:39.721 04:01:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:39.721 04:01:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:39.721 04:01:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:10:39.721 04:01:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:39.721 04:01:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:39.721 04:01:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:39.721 04:01:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:39.721 04:01:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:39.721 04:01:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:39.721 04:01:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:39.721 04:01:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:39.721 04:01:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:10:39.721 04:01:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:39.721 04:01:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:39.721 04:01:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:39.721 04:01:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 
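A note on the `512 ` signatures being compared in the entries above: the test builds the same metadata signature for the raid bdev and each base bdev with the jq filter `[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")`. Fields absent from the `bdev_get_bdevs` output come back as `null`, which jq's `join` renders as empty strings, so a plain 512-byte bdev with no metadata yields `512` followed by three spaces, matching the `[[ 512 == \5\1\2\ \ \ ]]` checks in the log. A minimal standalone sketch (the sample JSON object below is hypothetical, not taken from this run):

```shell
#!/usr/bin/env bash
# Hypothetical bdev_get_bdevs-style record; md_size, md_interleave and
# dif_type are absent (null), as for a plain 512-byte malloc bdev.
bdev='{"block_size": 512}'

# Same filter the test applies at bdev_raid.sh@189/@192.
sig=$(jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' <<< "$bdev")

# jq's join() treats null elements as empty strings, so the signature is
# "512" followed by three spaces; command substitution keeps trailing
# spaces (it only strips trailing newlines).
[[ $sig == "512   " ]] && echo "signatures match"
```

This is why a mismatch in block size, metadata size, interleave mode, or DIF type between the raid volume and any base bdev fails the comparison even though only `block_size` is non-null here.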
00:10:39.721 04:01:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:39.722 04:01:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:10:39.722 04:01:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:39.722 04:01:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:39.722 [2024-12-06 04:01:33.054520] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:10:39.722 [2024-12-06 04:01:33.054592] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:39.722 [2024-12-06 04:01:33.054676] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:39.981 04:01:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:39.981 04:01:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:10:39.981 04:01:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy concat 00:10:39.981 04:01:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:10:39.981 04:01:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1 00:10:39.981 04:01:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:10:39.981 04:01:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 2 00:10:39.981 04:01:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:39.981 04:01:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:10:39.981 04:01:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:39.981 04:01:33 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:39.981 04:01:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:10:39.981 04:01:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:39.981 04:01:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:39.981 04:01:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:39.981 04:01:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:39.981 04:01:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:39.981 04:01:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:39.981 04:01:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:39.981 04:01:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:39.981 04:01:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:39.981 04:01:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:39.981 "name": "Existed_Raid", 00:10:39.981 "uuid": "f38c2c1f-1df8-41e3-ad98-96e2bfd0acc6", 00:10:39.981 "strip_size_kb": 64, 00:10:39.981 "state": "offline", 00:10:39.981 "raid_level": "concat", 00:10:39.981 "superblock": false, 00:10:39.981 "num_base_bdevs": 3, 00:10:39.981 "num_base_bdevs_discovered": 2, 00:10:39.981 "num_base_bdevs_operational": 2, 00:10:39.981 "base_bdevs_list": [ 00:10:39.981 { 00:10:39.981 "name": null, 00:10:39.981 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:39.981 "is_configured": false, 00:10:39.981 "data_offset": 0, 00:10:39.981 "data_size": 65536 00:10:39.981 }, 00:10:39.981 { 00:10:39.981 "name": "BaseBdev2", 00:10:39.981 "uuid": 
"9378fd35-08fb-4438-8aa4-5d93233271be", 00:10:39.981 "is_configured": true, 00:10:39.981 "data_offset": 0, 00:10:39.981 "data_size": 65536 00:10:39.981 }, 00:10:39.981 { 00:10:39.981 "name": "BaseBdev3", 00:10:39.981 "uuid": "c44f39c3-a567-4012-b0e1-260f60452fec", 00:10:39.981 "is_configured": true, 00:10:39.981 "data_offset": 0, 00:10:39.981 "data_size": 65536 00:10:39.981 } 00:10:39.981 ] 00:10:39.981 }' 00:10:39.981 04:01:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:39.981 04:01:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:40.551 04:01:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:10:40.551 04:01:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:40.551 04:01:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:40.551 04:01:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:10:40.551 04:01:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:40.551 04:01:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:40.551 04:01:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:40.551 04:01:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:10:40.551 04:01:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:40.551 04:01:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:10:40.551 04:01:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:40.551 04:01:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:40.551 [2024-12-06 04:01:33.651477] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:10:40.551 04:01:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:40.551 04:01:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:40.551 04:01:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:40.551 04:01:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:40.551 04:01:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:10:40.551 04:01:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:40.551 04:01:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:40.551 04:01:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:40.551 04:01:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:10:40.551 04:01:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:40.551 04:01:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:10:40.551 04:01:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:40.551 04:01:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:40.551 [2024-12-06 04:01:33.812960] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:10:40.551 [2024-12-06 04:01:33.813079] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:10:40.811 04:01:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:40.811 04:01:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:40.811 04:01:33 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:40.811 04:01:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:40.811 04:01:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:10:40.811 04:01:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:40.811 04:01:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:40.811 04:01:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:40.811 04:01:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:10:40.811 04:01:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:10:40.811 04:01:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:10:40.811 04:01:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:10:40.811 04:01:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:40.811 04:01:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:10:40.811 04:01:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:40.811 04:01:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:40.811 BaseBdev2 00:10:40.811 04:01:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:40.811 04:01:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:10:40.811 04:01:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:10:40.811 04:01:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:40.811 
04:01:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:10:40.811 04:01:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:40.811 04:01:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:40.811 04:01:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:40.811 04:01:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:40.811 04:01:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:40.811 04:01:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:40.811 04:01:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:10:40.811 04:01:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:40.811 04:01:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:40.811 [ 00:10:40.811 { 00:10:40.811 "name": "BaseBdev2", 00:10:40.811 "aliases": [ 00:10:40.811 "86e2c894-2ae0-4a10-8170-8b8aa5bc50b0" 00:10:40.811 ], 00:10:40.811 "product_name": "Malloc disk", 00:10:40.811 "block_size": 512, 00:10:40.811 "num_blocks": 65536, 00:10:40.811 "uuid": "86e2c894-2ae0-4a10-8170-8b8aa5bc50b0", 00:10:40.811 "assigned_rate_limits": { 00:10:40.811 "rw_ios_per_sec": 0, 00:10:40.811 "rw_mbytes_per_sec": 0, 00:10:40.811 "r_mbytes_per_sec": 0, 00:10:40.811 "w_mbytes_per_sec": 0 00:10:40.811 }, 00:10:40.811 "claimed": false, 00:10:40.811 "zoned": false, 00:10:40.811 "supported_io_types": { 00:10:40.811 "read": true, 00:10:40.811 "write": true, 00:10:40.811 "unmap": true, 00:10:40.811 "flush": true, 00:10:40.811 "reset": true, 00:10:40.811 "nvme_admin": false, 00:10:40.811 "nvme_io": false, 00:10:40.811 "nvme_io_md": false, 00:10:40.811 "write_zeroes": true, 
00:10:40.811 "zcopy": true, 00:10:40.811 "get_zone_info": false, 00:10:40.811 "zone_management": false, 00:10:40.811 "zone_append": false, 00:10:40.811 "compare": false, 00:10:40.811 "compare_and_write": false, 00:10:40.811 "abort": true, 00:10:40.811 "seek_hole": false, 00:10:40.811 "seek_data": false, 00:10:40.811 "copy": true, 00:10:40.811 "nvme_iov_md": false 00:10:40.811 }, 00:10:40.811 "memory_domains": [ 00:10:40.811 { 00:10:40.811 "dma_device_id": "system", 00:10:40.811 "dma_device_type": 1 00:10:40.811 }, 00:10:40.811 { 00:10:40.811 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:40.811 "dma_device_type": 2 00:10:40.811 } 00:10:40.811 ], 00:10:40.811 "driver_specific": {} 00:10:40.811 } 00:10:40.811 ] 00:10:40.811 04:01:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:40.811 04:01:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:10:40.811 04:01:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:10:40.811 04:01:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:40.811 04:01:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:10:40.811 04:01:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:40.811 04:01:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:40.811 BaseBdev3 00:10:40.811 04:01:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:40.811 04:01:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:10:40.811 04:01:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:10:40.811 04:01:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:40.811 04:01:34 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:10:40.811 04:01:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:40.811 04:01:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:40.811 04:01:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:40.811 04:01:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:40.811 04:01:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:40.811 04:01:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:40.812 04:01:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:10:40.812 04:01:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:40.812 04:01:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:40.812 [ 00:10:40.812 { 00:10:40.812 "name": "BaseBdev3", 00:10:40.812 "aliases": [ 00:10:40.812 "c5a7b2cc-8d15-48ad-a3d2-aa0a62f033d9" 00:10:40.812 ], 00:10:40.812 "product_name": "Malloc disk", 00:10:40.812 "block_size": 512, 00:10:40.812 "num_blocks": 65536, 00:10:40.812 "uuid": "c5a7b2cc-8d15-48ad-a3d2-aa0a62f033d9", 00:10:40.812 "assigned_rate_limits": { 00:10:40.812 "rw_ios_per_sec": 0, 00:10:40.812 "rw_mbytes_per_sec": 0, 00:10:40.812 "r_mbytes_per_sec": 0, 00:10:40.812 "w_mbytes_per_sec": 0 00:10:40.812 }, 00:10:40.812 "claimed": false, 00:10:40.812 "zoned": false, 00:10:40.812 "supported_io_types": { 00:10:40.812 "read": true, 00:10:40.812 "write": true, 00:10:40.812 "unmap": true, 00:10:40.812 "flush": true, 00:10:40.812 "reset": true, 00:10:40.812 "nvme_admin": false, 00:10:40.812 "nvme_io": false, 00:10:40.812 "nvme_io_md": false, 00:10:40.812 "write_zeroes": true, 
00:10:40.812 "zcopy": true, 00:10:40.812 "get_zone_info": false, 00:10:40.812 "zone_management": false, 00:10:40.812 "zone_append": false, 00:10:40.812 "compare": false, 00:10:40.812 "compare_and_write": false, 00:10:40.812 "abort": true, 00:10:40.812 "seek_hole": false, 00:10:40.812 "seek_data": false, 00:10:40.812 "copy": true, 00:10:40.812 "nvme_iov_md": false 00:10:40.812 }, 00:10:40.812 "memory_domains": [ 00:10:40.812 { 00:10:40.812 "dma_device_id": "system", 00:10:40.812 "dma_device_type": 1 00:10:40.812 }, 00:10:40.812 { 00:10:40.812 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:40.812 "dma_device_type": 2 00:10:40.812 } 00:10:40.812 ], 00:10:40.812 "driver_specific": {} 00:10:40.812 } 00:10:40.812 ] 00:10:40.812 04:01:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:40.812 04:01:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:10:40.812 04:01:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:10:40.812 04:01:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:40.812 04:01:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:10:40.812 04:01:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:40.812 04:01:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:40.812 [2024-12-06 04:01:34.113181] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:40.812 [2024-12-06 04:01:34.113282] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:40.812 [2024-12-06 04:01:34.113350] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:40.812 [2024-12-06 04:01:34.115489] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:40.812 04:01:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:40.812 04:01:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:10:40.812 04:01:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:40.812 04:01:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:40.812 04:01:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:40.812 04:01:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:40.812 04:01:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:40.812 04:01:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:40.812 04:01:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:40.812 04:01:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:40.812 04:01:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:40.812 04:01:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:40.812 04:01:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:40.812 04:01:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:40.812 04:01:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:40.812 04:01:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:41.072 04:01:34 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:41.072 "name": "Existed_Raid", 00:10:41.072 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:41.072 "strip_size_kb": 64, 00:10:41.072 "state": "configuring", 00:10:41.072 "raid_level": "concat", 00:10:41.072 "superblock": false, 00:10:41.072 "num_base_bdevs": 3, 00:10:41.072 "num_base_bdevs_discovered": 2, 00:10:41.072 "num_base_bdevs_operational": 3, 00:10:41.072 "base_bdevs_list": [ 00:10:41.072 { 00:10:41.072 "name": "BaseBdev1", 00:10:41.072 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:41.072 "is_configured": false, 00:10:41.072 "data_offset": 0, 00:10:41.072 "data_size": 0 00:10:41.072 }, 00:10:41.072 { 00:10:41.072 "name": "BaseBdev2", 00:10:41.072 "uuid": "86e2c894-2ae0-4a10-8170-8b8aa5bc50b0", 00:10:41.072 "is_configured": true, 00:10:41.072 "data_offset": 0, 00:10:41.072 "data_size": 65536 00:10:41.072 }, 00:10:41.072 { 00:10:41.072 "name": "BaseBdev3", 00:10:41.072 "uuid": "c5a7b2cc-8d15-48ad-a3d2-aa0a62f033d9", 00:10:41.072 "is_configured": true, 00:10:41.072 "data_offset": 0, 00:10:41.072 "data_size": 65536 00:10:41.072 } 00:10:41.072 ] 00:10:41.072 }' 00:10:41.072 04:01:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:41.072 04:01:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:41.341 04:01:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:10:41.341 04:01:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:41.341 04:01:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:41.341 [2024-12-06 04:01:34.592394] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:10:41.341 04:01:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:41.341 04:01:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # 
verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:10:41.341 04:01:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:41.341 04:01:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:41.341 04:01:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:41.341 04:01:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:41.341 04:01:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:41.341 04:01:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:41.341 04:01:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:41.341 04:01:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:41.341 04:01:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:41.341 04:01:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:41.341 04:01:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:41.341 04:01:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:41.341 04:01:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:41.341 04:01:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:41.341 04:01:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:41.341 "name": "Existed_Raid", 00:10:41.341 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:41.341 "strip_size_kb": 64, 00:10:41.341 "state": "configuring", 00:10:41.341 "raid_level": "concat", 00:10:41.341 "superblock": false, 
00:10:41.341 "num_base_bdevs": 3, 00:10:41.341 "num_base_bdevs_discovered": 1, 00:10:41.341 "num_base_bdevs_operational": 3, 00:10:41.341 "base_bdevs_list": [ 00:10:41.341 { 00:10:41.341 "name": "BaseBdev1", 00:10:41.341 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:41.341 "is_configured": false, 00:10:41.341 "data_offset": 0, 00:10:41.341 "data_size": 0 00:10:41.341 }, 00:10:41.341 { 00:10:41.341 "name": null, 00:10:41.341 "uuid": "86e2c894-2ae0-4a10-8170-8b8aa5bc50b0", 00:10:41.341 "is_configured": false, 00:10:41.341 "data_offset": 0, 00:10:41.341 "data_size": 65536 00:10:41.341 }, 00:10:41.341 { 00:10:41.341 "name": "BaseBdev3", 00:10:41.341 "uuid": "c5a7b2cc-8d15-48ad-a3d2-aa0a62f033d9", 00:10:41.341 "is_configured": true, 00:10:41.341 "data_offset": 0, 00:10:41.341 "data_size": 65536 00:10:41.341 } 00:10:41.341 ] 00:10:41.341 }' 00:10:41.341 04:01:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:41.341 04:01:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:41.925 04:01:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:10:41.925 04:01:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:41.925 04:01:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:41.925 04:01:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:41.925 04:01:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:41.925 04:01:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:10:41.925 04:01:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:10:41.925 04:01:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:41.925 
04:01:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:41.925 [2024-12-06 04:01:35.148358] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:41.925 BaseBdev1 00:10:41.925 04:01:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:41.925 04:01:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:10:41.925 04:01:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:10:41.925 04:01:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:41.925 04:01:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:10:41.925 04:01:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:41.925 04:01:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:41.925 04:01:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:41.925 04:01:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:41.925 04:01:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:41.925 04:01:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:41.925 04:01:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:10:41.925 04:01:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:41.925 04:01:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:41.925 [ 00:10:41.925 { 00:10:41.925 "name": "BaseBdev1", 00:10:41.925 "aliases": [ 00:10:41.925 "8624458f-8ef1-4cb7-9d84-9ec304d5e166" 00:10:41.925 ], 00:10:41.925 "product_name": 
"Malloc disk", 00:10:41.925 "block_size": 512, 00:10:41.925 "num_blocks": 65536, 00:10:41.925 "uuid": "8624458f-8ef1-4cb7-9d84-9ec304d5e166", 00:10:41.925 "assigned_rate_limits": { 00:10:41.925 "rw_ios_per_sec": 0, 00:10:41.925 "rw_mbytes_per_sec": 0, 00:10:41.925 "r_mbytes_per_sec": 0, 00:10:41.925 "w_mbytes_per_sec": 0 00:10:41.925 }, 00:10:41.925 "claimed": true, 00:10:41.925 "claim_type": "exclusive_write", 00:10:41.925 "zoned": false, 00:10:41.925 "supported_io_types": { 00:10:41.925 "read": true, 00:10:41.925 "write": true, 00:10:41.925 "unmap": true, 00:10:41.925 "flush": true, 00:10:41.925 "reset": true, 00:10:41.925 "nvme_admin": false, 00:10:41.925 "nvme_io": false, 00:10:41.925 "nvme_io_md": false, 00:10:41.925 "write_zeroes": true, 00:10:41.925 "zcopy": true, 00:10:41.925 "get_zone_info": false, 00:10:41.925 "zone_management": false, 00:10:41.925 "zone_append": false, 00:10:41.925 "compare": false, 00:10:41.925 "compare_and_write": false, 00:10:41.925 "abort": true, 00:10:41.925 "seek_hole": false, 00:10:41.925 "seek_data": false, 00:10:41.925 "copy": true, 00:10:41.925 "nvme_iov_md": false 00:10:41.925 }, 00:10:41.925 "memory_domains": [ 00:10:41.925 { 00:10:41.925 "dma_device_id": "system", 00:10:41.925 "dma_device_type": 1 00:10:41.925 }, 00:10:41.925 { 00:10:41.925 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:41.925 "dma_device_type": 2 00:10:41.925 } 00:10:41.925 ], 00:10:41.925 "driver_specific": {} 00:10:41.925 } 00:10:41.925 ] 00:10:41.925 04:01:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:41.925 04:01:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:10:41.925 04:01:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:10:41.925 04:01:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:41.925 04:01:35 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:41.925 04:01:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:41.925 04:01:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:41.925 04:01:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:41.925 04:01:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:41.925 04:01:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:41.925 04:01:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:41.925 04:01:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:41.925 04:01:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:41.925 04:01:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:41.925 04:01:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:41.925 04:01:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:41.925 04:01:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:41.925 04:01:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:41.925 "name": "Existed_Raid", 00:10:41.925 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:41.925 "strip_size_kb": 64, 00:10:41.925 "state": "configuring", 00:10:41.925 "raid_level": "concat", 00:10:41.925 "superblock": false, 00:10:41.925 "num_base_bdevs": 3, 00:10:41.925 "num_base_bdevs_discovered": 2, 00:10:41.925 "num_base_bdevs_operational": 3, 00:10:41.925 "base_bdevs_list": [ 00:10:41.925 { 00:10:41.925 "name": "BaseBdev1", 
00:10:41.925 "uuid": "8624458f-8ef1-4cb7-9d84-9ec304d5e166", 00:10:41.925 "is_configured": true, 00:10:41.925 "data_offset": 0, 00:10:41.925 "data_size": 65536 00:10:41.925 }, 00:10:41.925 { 00:10:41.925 "name": null, 00:10:41.925 "uuid": "86e2c894-2ae0-4a10-8170-8b8aa5bc50b0", 00:10:41.925 "is_configured": false, 00:10:41.925 "data_offset": 0, 00:10:41.925 "data_size": 65536 00:10:41.925 }, 00:10:41.925 { 00:10:41.925 "name": "BaseBdev3", 00:10:41.925 "uuid": "c5a7b2cc-8d15-48ad-a3d2-aa0a62f033d9", 00:10:41.925 "is_configured": true, 00:10:41.925 "data_offset": 0, 00:10:41.925 "data_size": 65536 00:10:41.925 } 00:10:41.925 ] 00:10:41.925 }' 00:10:41.925 04:01:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:41.925 04:01:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:42.493 04:01:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:42.493 04:01:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:42.493 04:01:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:42.493 04:01:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:10:42.493 04:01:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:42.493 04:01:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:10:42.493 04:01:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:10:42.493 04:01:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:42.493 04:01:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:42.493 [2024-12-06 04:01:35.715534] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:10:42.493 
04:01:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:42.493 04:01:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:10:42.493 04:01:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:42.493 04:01:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:42.493 04:01:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:42.493 04:01:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:42.493 04:01:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:42.493 04:01:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:42.493 04:01:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:42.493 04:01:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:42.493 04:01:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:42.493 04:01:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:42.493 04:01:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:42.493 04:01:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:42.493 04:01:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:42.493 04:01:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:42.493 04:01:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:42.493 "name": "Existed_Raid", 00:10:42.493 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:10:42.493 "strip_size_kb": 64, 00:10:42.493 "state": "configuring", 00:10:42.493 "raid_level": "concat", 00:10:42.493 "superblock": false, 00:10:42.493 "num_base_bdevs": 3, 00:10:42.493 "num_base_bdevs_discovered": 1, 00:10:42.493 "num_base_bdevs_operational": 3, 00:10:42.493 "base_bdevs_list": [ 00:10:42.493 { 00:10:42.493 "name": "BaseBdev1", 00:10:42.493 "uuid": "8624458f-8ef1-4cb7-9d84-9ec304d5e166", 00:10:42.493 "is_configured": true, 00:10:42.493 "data_offset": 0, 00:10:42.493 "data_size": 65536 00:10:42.493 }, 00:10:42.493 { 00:10:42.493 "name": null, 00:10:42.493 "uuid": "86e2c894-2ae0-4a10-8170-8b8aa5bc50b0", 00:10:42.493 "is_configured": false, 00:10:42.493 "data_offset": 0, 00:10:42.493 "data_size": 65536 00:10:42.493 }, 00:10:42.493 { 00:10:42.493 "name": null, 00:10:42.493 "uuid": "c5a7b2cc-8d15-48ad-a3d2-aa0a62f033d9", 00:10:42.493 "is_configured": false, 00:10:42.493 "data_offset": 0, 00:10:42.493 "data_size": 65536 00:10:42.493 } 00:10:42.493 ] 00:10:42.493 }' 00:10:42.493 04:01:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:42.493 04:01:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:43.062 04:01:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:43.062 04:01:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:10:43.062 04:01:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:43.062 04:01:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:43.062 04:01:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:43.062 04:01:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:10:43.062 04:01:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 
-- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:10:43.062 04:01:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:43.062 04:01:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:43.062 [2024-12-06 04:01:36.234717] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:43.062 04:01:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:43.062 04:01:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:10:43.062 04:01:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:43.062 04:01:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:43.062 04:01:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:43.062 04:01:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:43.062 04:01:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:43.062 04:01:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:43.062 04:01:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:43.062 04:01:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:43.062 04:01:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:43.062 04:01:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:43.062 04:01:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:43.062 04:01:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | 
select(.name == "Existed_Raid")' 00:10:43.062 04:01:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:43.062 04:01:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:43.062 04:01:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:43.062 "name": "Existed_Raid", 00:10:43.062 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:43.062 "strip_size_kb": 64, 00:10:43.062 "state": "configuring", 00:10:43.062 "raid_level": "concat", 00:10:43.062 "superblock": false, 00:10:43.062 "num_base_bdevs": 3, 00:10:43.062 "num_base_bdevs_discovered": 2, 00:10:43.062 "num_base_bdevs_operational": 3, 00:10:43.062 "base_bdevs_list": [ 00:10:43.062 { 00:10:43.062 "name": "BaseBdev1", 00:10:43.062 "uuid": "8624458f-8ef1-4cb7-9d84-9ec304d5e166", 00:10:43.062 "is_configured": true, 00:10:43.062 "data_offset": 0, 00:10:43.062 "data_size": 65536 00:10:43.062 }, 00:10:43.062 { 00:10:43.062 "name": null, 00:10:43.062 "uuid": "86e2c894-2ae0-4a10-8170-8b8aa5bc50b0", 00:10:43.062 "is_configured": false, 00:10:43.062 "data_offset": 0, 00:10:43.062 "data_size": 65536 00:10:43.062 }, 00:10:43.062 { 00:10:43.062 "name": "BaseBdev3", 00:10:43.062 "uuid": "c5a7b2cc-8d15-48ad-a3d2-aa0a62f033d9", 00:10:43.062 "is_configured": true, 00:10:43.062 "data_offset": 0, 00:10:43.062 "data_size": 65536 00:10:43.062 } 00:10:43.062 ] 00:10:43.062 }' 00:10:43.062 04:01:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:43.062 04:01:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:43.633 04:01:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:10:43.633 04:01:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:43.633 04:01:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:10:43.633 04:01:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:43.633 04:01:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:43.633 04:01:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:10:43.633 04:01:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:10:43.633 04:01:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:43.633 04:01:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:43.633 [2024-12-06 04:01:36.713928] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:10:43.633 04:01:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:43.633 04:01:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:10:43.633 04:01:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:43.633 04:01:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:43.633 04:01:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:43.633 04:01:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:43.633 04:01:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:43.633 04:01:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:43.633 04:01:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:43.633 04:01:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:43.633 04:01:36 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:43.633 04:01:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:43.633 04:01:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:43.633 04:01:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:43.633 04:01:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:43.633 04:01:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:43.633 04:01:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:43.633 "name": "Existed_Raid", 00:10:43.633 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:43.633 "strip_size_kb": 64, 00:10:43.633 "state": "configuring", 00:10:43.633 "raid_level": "concat", 00:10:43.633 "superblock": false, 00:10:43.633 "num_base_bdevs": 3, 00:10:43.633 "num_base_bdevs_discovered": 1, 00:10:43.633 "num_base_bdevs_operational": 3, 00:10:43.633 "base_bdevs_list": [ 00:10:43.633 { 00:10:43.633 "name": null, 00:10:43.633 "uuid": "8624458f-8ef1-4cb7-9d84-9ec304d5e166", 00:10:43.633 "is_configured": false, 00:10:43.633 "data_offset": 0, 00:10:43.633 "data_size": 65536 00:10:43.633 }, 00:10:43.633 { 00:10:43.633 "name": null, 00:10:43.633 "uuid": "86e2c894-2ae0-4a10-8170-8b8aa5bc50b0", 00:10:43.633 "is_configured": false, 00:10:43.633 "data_offset": 0, 00:10:43.633 "data_size": 65536 00:10:43.633 }, 00:10:43.633 { 00:10:43.633 "name": "BaseBdev3", 00:10:43.633 "uuid": "c5a7b2cc-8d15-48ad-a3d2-aa0a62f033d9", 00:10:43.633 "is_configured": true, 00:10:43.633 "data_offset": 0, 00:10:43.633 "data_size": 65536 00:10:43.633 } 00:10:43.633 ] 00:10:43.633 }' 00:10:43.633 04:01:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:43.633 04:01:36 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:43.893 04:01:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:10:43.893 04:01:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:44.152 04:01:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:44.152 04:01:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:44.152 04:01:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:44.152 04:01:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:10:44.152 04:01:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:10:44.152 04:01:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:44.152 04:01:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:44.152 [2024-12-06 04:01:37.274816] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:44.152 04:01:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:44.152 04:01:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:10:44.152 04:01:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:44.152 04:01:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:44.152 04:01:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:44.152 04:01:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:44.152 04:01:37 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:44.152 04:01:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:44.152 04:01:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:44.152 04:01:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:44.152 04:01:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:44.152 04:01:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:44.152 04:01:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:44.152 04:01:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:44.152 04:01:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:44.152 04:01:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:44.152 04:01:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:44.152 "name": "Existed_Raid", 00:10:44.152 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:44.152 "strip_size_kb": 64, 00:10:44.152 "state": "configuring", 00:10:44.152 "raid_level": "concat", 00:10:44.152 "superblock": false, 00:10:44.152 "num_base_bdevs": 3, 00:10:44.152 "num_base_bdevs_discovered": 2, 00:10:44.152 "num_base_bdevs_operational": 3, 00:10:44.152 "base_bdevs_list": [ 00:10:44.152 { 00:10:44.152 "name": null, 00:10:44.152 "uuid": "8624458f-8ef1-4cb7-9d84-9ec304d5e166", 00:10:44.152 "is_configured": false, 00:10:44.152 "data_offset": 0, 00:10:44.152 "data_size": 65536 00:10:44.152 }, 00:10:44.152 { 00:10:44.152 "name": "BaseBdev2", 00:10:44.152 "uuid": "86e2c894-2ae0-4a10-8170-8b8aa5bc50b0", 00:10:44.152 "is_configured": true, 00:10:44.152 "data_offset": 
0, 00:10:44.152 "data_size": 65536 00:10:44.152 }, 00:10:44.152 { 00:10:44.152 "name": "BaseBdev3", 00:10:44.152 "uuid": "c5a7b2cc-8d15-48ad-a3d2-aa0a62f033d9", 00:10:44.152 "is_configured": true, 00:10:44.152 "data_offset": 0, 00:10:44.152 "data_size": 65536 00:10:44.152 } 00:10:44.152 ] 00:10:44.152 }' 00:10:44.152 04:01:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:44.152 04:01:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:44.411 04:01:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:44.411 04:01:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:10:44.411 04:01:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:44.411 04:01:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:44.411 04:01:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:44.671 04:01:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:10:44.671 04:01:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:44.671 04:01:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:44.671 04:01:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:44.671 04:01:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:10:44.671 04:01:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:44.671 04:01:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 8624458f-8ef1-4cb7-9d84-9ec304d5e166 00:10:44.671 04:01:37 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:10:44.671 04:01:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:44.671 [2024-12-06 04:01:37.867494] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:10:44.671 [2024-12-06 04:01:37.867542] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:10:44.671 [2024-12-06 04:01:37.867551] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:10:44.671 [2024-12-06 04:01:37.867799] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:10:44.671 [2024-12-06 04:01:37.867952] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:10:44.671 [2024-12-06 04:01:37.867970] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:10:44.671 [2024-12-06 04:01:37.868239] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:44.671 NewBaseBdev 00:10:44.671 04:01:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:44.671 04:01:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:10:44.671 04:01:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:10:44.671 04:01:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:44.671 04:01:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:10:44.671 04:01:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:44.671 04:01:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:44.671 04:01:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:44.671 
04:01:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:44.671 04:01:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:44.671 04:01:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:44.671 04:01:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:10:44.671 04:01:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:44.671 04:01:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:44.671 [ 00:10:44.671 { 00:10:44.671 "name": "NewBaseBdev", 00:10:44.671 "aliases": [ 00:10:44.671 "8624458f-8ef1-4cb7-9d84-9ec304d5e166" 00:10:44.671 ], 00:10:44.671 "product_name": "Malloc disk", 00:10:44.671 "block_size": 512, 00:10:44.671 "num_blocks": 65536, 00:10:44.671 "uuid": "8624458f-8ef1-4cb7-9d84-9ec304d5e166", 00:10:44.671 "assigned_rate_limits": { 00:10:44.671 "rw_ios_per_sec": 0, 00:10:44.671 "rw_mbytes_per_sec": 0, 00:10:44.671 "r_mbytes_per_sec": 0, 00:10:44.671 "w_mbytes_per_sec": 0 00:10:44.671 }, 00:10:44.671 "claimed": true, 00:10:44.671 "claim_type": "exclusive_write", 00:10:44.671 "zoned": false, 00:10:44.671 "supported_io_types": { 00:10:44.671 "read": true, 00:10:44.671 "write": true, 00:10:44.671 "unmap": true, 00:10:44.671 "flush": true, 00:10:44.671 "reset": true, 00:10:44.671 "nvme_admin": false, 00:10:44.671 "nvme_io": false, 00:10:44.671 "nvme_io_md": false, 00:10:44.671 "write_zeroes": true, 00:10:44.671 "zcopy": true, 00:10:44.671 "get_zone_info": false, 00:10:44.671 "zone_management": false, 00:10:44.671 "zone_append": false, 00:10:44.671 "compare": false, 00:10:44.671 "compare_and_write": false, 00:10:44.671 "abort": true, 00:10:44.671 "seek_hole": false, 00:10:44.671 "seek_data": false, 00:10:44.671 "copy": true, 00:10:44.671 "nvme_iov_md": false 00:10:44.671 }, 00:10:44.671 
"memory_domains": [ 00:10:44.671 { 00:10:44.671 "dma_device_id": "system", 00:10:44.671 "dma_device_type": 1 00:10:44.671 }, 00:10:44.671 { 00:10:44.671 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:44.671 "dma_device_type": 2 00:10:44.671 } 00:10:44.671 ], 00:10:44.671 "driver_specific": {} 00:10:44.671 } 00:10:44.671 ] 00:10:44.671 04:01:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:44.671 04:01:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:10:44.671 04:01:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online concat 64 3 00:10:44.671 04:01:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:44.671 04:01:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:44.671 04:01:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:44.671 04:01:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:44.671 04:01:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:44.671 04:01:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:44.671 04:01:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:44.671 04:01:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:44.671 04:01:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:44.672 04:01:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:44.672 04:01:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:44.672 04:01:37 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:10:44.672 04:01:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:44.672 04:01:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:44.672 04:01:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:44.672 "name": "Existed_Raid", 00:10:44.672 "uuid": "708fa3a7-3f9c-4e91-966a-250e874bd793", 00:10:44.672 "strip_size_kb": 64, 00:10:44.672 "state": "online", 00:10:44.672 "raid_level": "concat", 00:10:44.672 "superblock": false, 00:10:44.672 "num_base_bdevs": 3, 00:10:44.672 "num_base_bdevs_discovered": 3, 00:10:44.672 "num_base_bdevs_operational": 3, 00:10:44.672 "base_bdevs_list": [ 00:10:44.672 { 00:10:44.672 "name": "NewBaseBdev", 00:10:44.672 "uuid": "8624458f-8ef1-4cb7-9d84-9ec304d5e166", 00:10:44.672 "is_configured": true, 00:10:44.672 "data_offset": 0, 00:10:44.672 "data_size": 65536 00:10:44.672 }, 00:10:44.672 { 00:10:44.672 "name": "BaseBdev2", 00:10:44.672 "uuid": "86e2c894-2ae0-4a10-8170-8b8aa5bc50b0", 00:10:44.672 "is_configured": true, 00:10:44.672 "data_offset": 0, 00:10:44.672 "data_size": 65536 00:10:44.672 }, 00:10:44.672 { 00:10:44.672 "name": "BaseBdev3", 00:10:44.672 "uuid": "c5a7b2cc-8d15-48ad-a3d2-aa0a62f033d9", 00:10:44.672 "is_configured": true, 00:10:44.672 "data_offset": 0, 00:10:44.672 "data_size": 65536 00:10:44.672 } 00:10:44.672 ] 00:10:44.672 }' 00:10:44.672 04:01:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:44.672 04:01:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:45.243 04:01:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:10:45.243 04:01:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:10:45.243 04:01:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 
-- # local raid_bdev_info 00:10:45.243 04:01:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:45.243 04:01:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:10:45.243 04:01:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:45.243 04:01:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:10:45.243 04:01:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:45.243 04:01:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:45.243 04:01:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:45.243 [2024-12-06 04:01:38.327079] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:45.243 04:01:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:45.243 04:01:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:45.243 "name": "Existed_Raid", 00:10:45.243 "aliases": [ 00:10:45.243 "708fa3a7-3f9c-4e91-966a-250e874bd793" 00:10:45.243 ], 00:10:45.243 "product_name": "Raid Volume", 00:10:45.243 "block_size": 512, 00:10:45.243 "num_blocks": 196608, 00:10:45.243 "uuid": "708fa3a7-3f9c-4e91-966a-250e874bd793", 00:10:45.243 "assigned_rate_limits": { 00:10:45.243 "rw_ios_per_sec": 0, 00:10:45.243 "rw_mbytes_per_sec": 0, 00:10:45.243 "r_mbytes_per_sec": 0, 00:10:45.243 "w_mbytes_per_sec": 0 00:10:45.243 }, 00:10:45.243 "claimed": false, 00:10:45.243 "zoned": false, 00:10:45.243 "supported_io_types": { 00:10:45.243 "read": true, 00:10:45.243 "write": true, 00:10:45.243 "unmap": true, 00:10:45.243 "flush": true, 00:10:45.243 "reset": true, 00:10:45.243 "nvme_admin": false, 00:10:45.243 "nvme_io": false, 00:10:45.243 "nvme_io_md": false, 00:10:45.243 "write_zeroes": true, 
00:10:45.243 "zcopy": false, 00:10:45.243 "get_zone_info": false, 00:10:45.243 "zone_management": false, 00:10:45.243 "zone_append": false, 00:10:45.243 "compare": false, 00:10:45.243 "compare_and_write": false, 00:10:45.243 "abort": false, 00:10:45.243 "seek_hole": false, 00:10:45.243 "seek_data": false, 00:10:45.243 "copy": false, 00:10:45.243 "nvme_iov_md": false 00:10:45.243 }, 00:10:45.243 "memory_domains": [ 00:10:45.243 { 00:10:45.243 "dma_device_id": "system", 00:10:45.243 "dma_device_type": 1 00:10:45.243 }, 00:10:45.243 { 00:10:45.243 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:45.243 "dma_device_type": 2 00:10:45.243 }, 00:10:45.243 { 00:10:45.243 "dma_device_id": "system", 00:10:45.243 "dma_device_type": 1 00:10:45.243 }, 00:10:45.243 { 00:10:45.243 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:45.243 "dma_device_type": 2 00:10:45.243 }, 00:10:45.243 { 00:10:45.243 "dma_device_id": "system", 00:10:45.243 "dma_device_type": 1 00:10:45.243 }, 00:10:45.243 { 00:10:45.243 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:45.243 "dma_device_type": 2 00:10:45.243 } 00:10:45.243 ], 00:10:45.243 "driver_specific": { 00:10:45.243 "raid": { 00:10:45.243 "uuid": "708fa3a7-3f9c-4e91-966a-250e874bd793", 00:10:45.243 "strip_size_kb": 64, 00:10:45.243 "state": "online", 00:10:45.243 "raid_level": "concat", 00:10:45.243 "superblock": false, 00:10:45.243 "num_base_bdevs": 3, 00:10:45.243 "num_base_bdevs_discovered": 3, 00:10:45.243 "num_base_bdevs_operational": 3, 00:10:45.243 "base_bdevs_list": [ 00:10:45.243 { 00:10:45.243 "name": "NewBaseBdev", 00:10:45.243 "uuid": "8624458f-8ef1-4cb7-9d84-9ec304d5e166", 00:10:45.243 "is_configured": true, 00:10:45.243 "data_offset": 0, 00:10:45.243 "data_size": 65536 00:10:45.243 }, 00:10:45.243 { 00:10:45.243 "name": "BaseBdev2", 00:10:45.243 "uuid": "86e2c894-2ae0-4a10-8170-8b8aa5bc50b0", 00:10:45.243 "is_configured": true, 00:10:45.243 "data_offset": 0, 00:10:45.243 "data_size": 65536 00:10:45.243 }, 00:10:45.243 { 
00:10:45.243 "name": "BaseBdev3", 00:10:45.243 "uuid": "c5a7b2cc-8d15-48ad-a3d2-aa0a62f033d9", 00:10:45.243 "is_configured": true, 00:10:45.243 "data_offset": 0, 00:10:45.243 "data_size": 65536 00:10:45.243 } 00:10:45.243 ] 00:10:45.243 } 00:10:45.243 } 00:10:45.243 }' 00:10:45.243 04:01:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:45.243 04:01:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:10:45.243 BaseBdev2 00:10:45.243 BaseBdev3' 00:10:45.243 04:01:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:45.243 04:01:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:45.243 04:01:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:45.243 04:01:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:10:45.243 04:01:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:45.243 04:01:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:45.243 04:01:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:45.243 04:01:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:45.243 04:01:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:45.243 04:01:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:45.243 04:01:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:45.243 04:01:38 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:10:45.243 04:01:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:45.243 04:01:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:45.243 04:01:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:45.243 04:01:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:45.243 04:01:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:45.243 04:01:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:45.243 04:01:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:45.243 04:01:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:10:45.243 04:01:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:45.243 04:01:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:45.243 04:01:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:45.243 04:01:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:45.244 04:01:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:45.244 04:01:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:45.244 04:01:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:45.244 04:01:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:45.244 04:01:38 bdev_raid.raid_state_function_test 
-- common/autotest_common.sh@10 -- # set +x 00:10:45.244 [2024-12-06 04:01:38.550394] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:45.244 [2024-12-06 04:01:38.550427] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:45.244 [2024-12-06 04:01:38.550506] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:45.244 [2024-12-06 04:01:38.550568] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:45.244 [2024-12-06 04:01:38.550582] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:10:45.244 04:01:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:45.244 04:01:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 65694 00:10:45.244 04:01:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # '[' -z 65694 ']' 00:10:45.244 04:01:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # kill -0 65694 00:10:45.244 04:01:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # uname 00:10:45.244 04:01:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:45.244 04:01:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 65694 00:10:45.244 04:01:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:45.244 04:01:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:45.244 killing process with pid 65694 00:10:45.244 04:01:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 65694' 00:10:45.244 04:01:38 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@973 -- # kill 65694 00:10:45.244 [2024-12-06 04:01:38.591880] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:45.244 04:01:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@978 -- # wait 65694 00:10:45.815 [2024-12-06 04:01:38.900391] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:46.754 04:01:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:10:46.754 00:10:46.754 real 0m10.661s 00:10:46.754 user 0m16.937s 00:10:46.754 sys 0m1.833s 00:10:46.754 04:01:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:46.754 04:01:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:46.754 ************************************ 00:10:46.754 END TEST raid_state_function_test 00:10:46.754 ************************************ 00:10:47.023 04:01:40 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test concat 3 true 00:10:47.023 04:01:40 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:10:47.023 04:01:40 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:47.023 04:01:40 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:47.023 ************************************ 00:10:47.023 START TEST raid_state_function_test_sb 00:10:47.023 ************************************ 00:10:47.023 04:01:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test concat 3 true 00:10:47.023 04:01:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=concat 00:10:47.023 04:01:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:10:47.023 04:01:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:10:47.023 04:01:40 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@208 -- # local raid_bdev 00:10:47.023 04:01:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:10:47.023 04:01:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:47.023 04:01:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:10:47.023 04:01:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:47.023 04:01:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:47.023 04:01:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:10:47.023 04:01:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:47.023 04:01:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:47.023 04:01:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:10:47.023 04:01:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:47.023 04:01:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:47.023 04:01:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:10:47.023 04:01:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:10:47.023 04:01:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:10:47.023 04:01:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:10:47.023 04:01:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:10:47.023 04:01:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:10:47.023 04:01:40 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']' 00:10:47.023 04:01:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:10:47.023 04:01:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:10:47.023 04:01:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:10:47.023 04:01:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:10:47.023 04:01:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=66322 00:10:47.023 04:01:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:10:47.023 04:01:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 66322' 00:10:47.023 Process raid pid: 66322 00:10:47.023 04:01:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 66322 00:10:47.023 04:01:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 66322 ']' 00:10:47.023 04:01:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:47.023 04:01:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:47.023 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:47.023 04:01:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:10:47.023 04:01:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:47.024 04:01:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:47.024 [2024-12-06 04:01:40.234699] Starting SPDK v25.01-pre git sha1 a4d2a837b / DPDK 24.03.0 initialization... 00:10:47.024 [2024-12-06 04:01:40.234817] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:47.298 [2024-12-06 04:01:40.410547] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:47.298 [2024-12-06 04:01:40.525654] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:47.558 [2024-12-06 04:01:40.728311] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:47.558 [2024-12-06 04:01:40.728355] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:47.819 04:01:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:47.819 04:01:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:10:47.819 04:01:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:10:47.819 04:01:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:47.819 04:01:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:47.819 [2024-12-06 04:01:41.098131] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:47.819 [2024-12-06 04:01:41.098191] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:47.819 [2024-12-06 
04:01:41.098207] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:47.819 [2024-12-06 04:01:41.098224] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:47.819 [2024-12-06 04:01:41.098236] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:10:47.819 [2024-12-06 04:01:41.098251] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:47.819 04:01:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:47.819 04:01:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:10:47.819 04:01:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:47.819 04:01:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:47.819 04:01:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:47.819 04:01:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:47.819 04:01:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:47.819 04:01:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:47.819 04:01:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:47.819 04:01:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:47.819 04:01:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:47.819 04:01:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:47.819 04:01:41 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:47.819 04:01:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:47.819 04:01:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:47.819 04:01:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:47.819 04:01:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:47.819 "name": "Existed_Raid", 00:10:47.819 "uuid": "c8561b2c-0b5d-4aa7-9b2d-b823226d0562", 00:10:47.819 "strip_size_kb": 64, 00:10:47.819 "state": "configuring", 00:10:47.819 "raid_level": "concat", 00:10:47.819 "superblock": true, 00:10:47.819 "num_base_bdevs": 3, 00:10:47.819 "num_base_bdevs_discovered": 0, 00:10:47.819 "num_base_bdevs_operational": 3, 00:10:47.819 "base_bdevs_list": [ 00:10:47.819 { 00:10:47.819 "name": "BaseBdev1", 00:10:47.819 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:47.819 "is_configured": false, 00:10:47.819 "data_offset": 0, 00:10:47.819 "data_size": 0 00:10:47.819 }, 00:10:47.819 { 00:10:47.819 "name": "BaseBdev2", 00:10:47.819 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:47.819 "is_configured": false, 00:10:47.819 "data_offset": 0, 00:10:47.819 "data_size": 0 00:10:47.819 }, 00:10:47.819 { 00:10:47.819 "name": "BaseBdev3", 00:10:47.819 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:47.819 "is_configured": false, 00:10:47.819 "data_offset": 0, 00:10:47.819 "data_size": 0 00:10:47.819 } 00:10:47.819 ] 00:10:47.819 }' 00:10:47.819 04:01:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:47.819 04:01:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:48.388 04:01:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:48.388 04:01:41 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:10:48.388 04:01:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:48.388 [2024-12-06 04:01:41.557233] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:48.388 [2024-12-06 04:01:41.557313] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:10:48.388 04:01:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:48.388 04:01:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:10:48.388 04:01:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:48.388 04:01:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:48.388 [2024-12-06 04:01:41.569225] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:48.388 [2024-12-06 04:01:41.569325] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:48.388 [2024-12-06 04:01:41.569369] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:48.388 [2024-12-06 04:01:41.569406] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:48.388 [2024-12-06 04:01:41.569461] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:10:48.388 [2024-12-06 04:01:41.569500] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:48.388 04:01:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:48.388 04:01:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:10:48.388 
04:01:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:48.388 04:01:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:48.388 [2024-12-06 04:01:41.616389] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:48.388 BaseBdev1 00:10:48.388 04:01:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:48.388 04:01:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:10:48.388 04:01:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:10:48.388 04:01:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:48.388 04:01:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:10:48.388 04:01:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:48.388 04:01:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:48.388 04:01:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:48.388 04:01:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:48.388 04:01:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:48.388 04:01:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:48.388 04:01:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:10:48.388 04:01:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:48.388 04:01:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:48.388 [ 00:10:48.388 { 
00:10:48.388 "name": "BaseBdev1", 00:10:48.389 "aliases": [ 00:10:48.389 "6f8edc9c-35a9-4ad9-9039-ed1d43c666e6" 00:10:48.389 ], 00:10:48.389 "product_name": "Malloc disk", 00:10:48.389 "block_size": 512, 00:10:48.389 "num_blocks": 65536, 00:10:48.389 "uuid": "6f8edc9c-35a9-4ad9-9039-ed1d43c666e6", 00:10:48.389 "assigned_rate_limits": { 00:10:48.389 "rw_ios_per_sec": 0, 00:10:48.389 "rw_mbytes_per_sec": 0, 00:10:48.389 "r_mbytes_per_sec": 0, 00:10:48.389 "w_mbytes_per_sec": 0 00:10:48.389 }, 00:10:48.389 "claimed": true, 00:10:48.389 "claim_type": "exclusive_write", 00:10:48.389 "zoned": false, 00:10:48.389 "supported_io_types": { 00:10:48.389 "read": true, 00:10:48.389 "write": true, 00:10:48.389 "unmap": true, 00:10:48.389 "flush": true, 00:10:48.389 "reset": true, 00:10:48.389 "nvme_admin": false, 00:10:48.389 "nvme_io": false, 00:10:48.389 "nvme_io_md": false, 00:10:48.389 "write_zeroes": true, 00:10:48.389 "zcopy": true, 00:10:48.389 "get_zone_info": false, 00:10:48.389 "zone_management": false, 00:10:48.389 "zone_append": false, 00:10:48.389 "compare": false, 00:10:48.389 "compare_and_write": false, 00:10:48.389 "abort": true, 00:10:48.389 "seek_hole": false, 00:10:48.389 "seek_data": false, 00:10:48.389 "copy": true, 00:10:48.389 "nvme_iov_md": false 00:10:48.389 }, 00:10:48.389 "memory_domains": [ 00:10:48.389 { 00:10:48.389 "dma_device_id": "system", 00:10:48.389 "dma_device_type": 1 00:10:48.389 }, 00:10:48.389 { 00:10:48.389 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:48.389 "dma_device_type": 2 00:10:48.389 } 00:10:48.389 ], 00:10:48.389 "driver_specific": {} 00:10:48.389 } 00:10:48.389 ] 00:10:48.389 04:01:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:48.389 04:01:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:10:48.389 04:01:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 
00:10:48.389 04:01:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:48.389 04:01:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:48.389 04:01:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:48.389 04:01:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:48.389 04:01:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:48.389 04:01:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:48.389 04:01:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:48.389 04:01:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:48.389 04:01:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:48.389 04:01:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:48.389 04:01:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:48.389 04:01:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:48.389 04:01:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:48.389 04:01:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:48.389 04:01:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:48.389 "name": "Existed_Raid", 00:10:48.389 "uuid": "7727518a-a403-40fb-b7f3-6624823be9db", 00:10:48.389 "strip_size_kb": 64, 00:10:48.389 "state": "configuring", 00:10:48.389 "raid_level": "concat", 00:10:48.389 "superblock": true, 00:10:48.389 
"num_base_bdevs": 3, 00:10:48.389 "num_base_bdevs_discovered": 1, 00:10:48.389 "num_base_bdevs_operational": 3, 00:10:48.389 "base_bdevs_list": [ 00:10:48.389 { 00:10:48.389 "name": "BaseBdev1", 00:10:48.389 "uuid": "6f8edc9c-35a9-4ad9-9039-ed1d43c666e6", 00:10:48.389 "is_configured": true, 00:10:48.389 "data_offset": 2048, 00:10:48.389 "data_size": 63488 00:10:48.389 }, 00:10:48.389 { 00:10:48.389 "name": "BaseBdev2", 00:10:48.389 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:48.389 "is_configured": false, 00:10:48.389 "data_offset": 0, 00:10:48.389 "data_size": 0 00:10:48.389 }, 00:10:48.389 { 00:10:48.389 "name": "BaseBdev3", 00:10:48.389 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:48.389 "is_configured": false, 00:10:48.389 "data_offset": 0, 00:10:48.389 "data_size": 0 00:10:48.389 } 00:10:48.389 ] 00:10:48.389 }' 00:10:48.389 04:01:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:48.389 04:01:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:48.959 04:01:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:48.959 04:01:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:48.959 04:01:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:48.959 [2024-12-06 04:01:42.055817] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:48.959 [2024-12-06 04:01:42.055915] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:10:48.959 04:01:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:48.959 04:01:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:10:48.959 
04:01:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:48.959 04:01:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:48.959 [2024-12-06 04:01:42.067858] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:48.959 [2024-12-06 04:01:42.069926] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:48.959 [2024-12-06 04:01:42.070009] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:48.959 [2024-12-06 04:01:42.070060] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:10:48.959 [2024-12-06 04:01:42.070097] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:48.959 04:01:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:48.959 04:01:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:10:48.959 04:01:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:48.959 04:01:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:10:48.959 04:01:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:48.959 04:01:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:48.959 04:01:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:48.959 04:01:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:48.959 04:01:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:48.959 04:01:42 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:48.959 04:01:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:48.959 04:01:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:48.959 04:01:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:48.959 04:01:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:48.959 04:01:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:48.959 04:01:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:48.959 04:01:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:48.959 04:01:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:48.959 04:01:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:48.959 "name": "Existed_Raid", 00:10:48.959 "uuid": "8e130f13-e1e4-4cd5-89f5-8d99c61c9802", 00:10:48.959 "strip_size_kb": 64, 00:10:48.959 "state": "configuring", 00:10:48.959 "raid_level": "concat", 00:10:48.959 "superblock": true, 00:10:48.959 "num_base_bdevs": 3, 00:10:48.959 "num_base_bdevs_discovered": 1, 00:10:48.959 "num_base_bdevs_operational": 3, 00:10:48.959 "base_bdevs_list": [ 00:10:48.959 { 00:10:48.959 "name": "BaseBdev1", 00:10:48.959 "uuid": "6f8edc9c-35a9-4ad9-9039-ed1d43c666e6", 00:10:48.959 "is_configured": true, 00:10:48.959 "data_offset": 2048, 00:10:48.959 "data_size": 63488 00:10:48.959 }, 00:10:48.959 { 00:10:48.959 "name": "BaseBdev2", 00:10:48.959 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:48.959 "is_configured": false, 00:10:48.959 "data_offset": 0, 00:10:48.959 "data_size": 0 00:10:48.959 }, 00:10:48.959 { 00:10:48.959 "name": "BaseBdev3", 00:10:48.959 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:10:48.959 "is_configured": false, 00:10:48.959 "data_offset": 0, 00:10:48.959 "data_size": 0 00:10:48.959 } 00:10:48.959 ] 00:10:48.959 }' 00:10:48.959 04:01:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:48.959 04:01:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:49.219 04:01:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:10:49.219 04:01:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:49.219 04:01:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:49.219 [2024-12-06 04:01:42.529696] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:49.219 BaseBdev2 00:10:49.219 04:01:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:49.219 04:01:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:10:49.219 04:01:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:10:49.219 04:01:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:49.219 04:01:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:10:49.219 04:01:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:49.219 04:01:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:49.219 04:01:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:49.219 04:01:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:49.219 04:01:42 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:10:49.219 04:01:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:49.219 04:01:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:10:49.219 04:01:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:49.219 04:01:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:49.219 [ 00:10:49.219 { 00:10:49.219 "name": "BaseBdev2", 00:10:49.219 "aliases": [ 00:10:49.219 "48066a70-f5b9-4fad-be33-adf5746b6c55" 00:10:49.219 ], 00:10:49.219 "product_name": "Malloc disk", 00:10:49.219 "block_size": 512, 00:10:49.220 "num_blocks": 65536, 00:10:49.220 "uuid": "48066a70-f5b9-4fad-be33-adf5746b6c55", 00:10:49.220 "assigned_rate_limits": { 00:10:49.220 "rw_ios_per_sec": 0, 00:10:49.220 "rw_mbytes_per_sec": 0, 00:10:49.220 "r_mbytes_per_sec": 0, 00:10:49.220 "w_mbytes_per_sec": 0 00:10:49.220 }, 00:10:49.220 "claimed": true, 00:10:49.220 "claim_type": "exclusive_write", 00:10:49.220 "zoned": false, 00:10:49.220 "supported_io_types": { 00:10:49.220 "read": true, 00:10:49.220 "write": true, 00:10:49.220 "unmap": true, 00:10:49.220 "flush": true, 00:10:49.220 "reset": true, 00:10:49.220 "nvme_admin": false, 00:10:49.220 "nvme_io": false, 00:10:49.220 "nvme_io_md": false, 00:10:49.220 "write_zeroes": true, 00:10:49.220 "zcopy": true, 00:10:49.220 "get_zone_info": false, 00:10:49.220 "zone_management": false, 00:10:49.220 "zone_append": false, 00:10:49.220 "compare": false, 00:10:49.220 "compare_and_write": false, 00:10:49.220 "abort": true, 00:10:49.220 "seek_hole": false, 00:10:49.220 "seek_data": false, 00:10:49.220 "copy": true, 00:10:49.220 "nvme_iov_md": false 00:10:49.220 }, 00:10:49.220 "memory_domains": [ 00:10:49.220 { 00:10:49.220 "dma_device_id": "system", 00:10:49.220 "dma_device_type": 1 00:10:49.220 }, 00:10:49.220 { 00:10:49.220 
"dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:49.220 "dma_device_type": 2 00:10:49.220 } 00:10:49.220 ], 00:10:49.220 "driver_specific": {} 00:10:49.220 } 00:10:49.220 ] 00:10:49.220 04:01:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:49.220 04:01:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:10:49.220 04:01:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:10:49.220 04:01:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:49.220 04:01:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:10:49.220 04:01:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:49.220 04:01:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:49.220 04:01:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:49.220 04:01:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:49.220 04:01:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:49.220 04:01:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:49.220 04:01:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:49.220 04:01:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:49.220 04:01:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:49.479 04:01:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:49.479 04:01:42 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:49.479 04:01:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:49.479 04:01:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:49.480 04:01:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:49.480 04:01:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:49.480 "name": "Existed_Raid", 00:10:49.480 "uuid": "8e130f13-e1e4-4cd5-89f5-8d99c61c9802", 00:10:49.480 "strip_size_kb": 64, 00:10:49.480 "state": "configuring", 00:10:49.480 "raid_level": "concat", 00:10:49.480 "superblock": true, 00:10:49.480 "num_base_bdevs": 3, 00:10:49.480 "num_base_bdevs_discovered": 2, 00:10:49.480 "num_base_bdevs_operational": 3, 00:10:49.480 "base_bdevs_list": [ 00:10:49.480 { 00:10:49.480 "name": "BaseBdev1", 00:10:49.480 "uuid": "6f8edc9c-35a9-4ad9-9039-ed1d43c666e6", 00:10:49.480 "is_configured": true, 00:10:49.480 "data_offset": 2048, 00:10:49.480 "data_size": 63488 00:10:49.480 }, 00:10:49.480 { 00:10:49.480 "name": "BaseBdev2", 00:10:49.480 "uuid": "48066a70-f5b9-4fad-be33-adf5746b6c55", 00:10:49.480 "is_configured": true, 00:10:49.480 "data_offset": 2048, 00:10:49.480 "data_size": 63488 00:10:49.480 }, 00:10:49.480 { 00:10:49.480 "name": "BaseBdev3", 00:10:49.480 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:49.480 "is_configured": false, 00:10:49.480 "data_offset": 0, 00:10:49.480 "data_size": 0 00:10:49.480 } 00:10:49.480 ] 00:10:49.480 }' 00:10:49.480 04:01:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:49.480 04:01:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:49.739 04:01:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:10:49.739 04:01:43 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:49.739 04:01:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:49.739 [2024-12-06 04:01:43.072755] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:49.739 [2024-12-06 04:01:43.073022] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:10:49.739 [2024-12-06 04:01:43.073069] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:10:49.739 [2024-12-06 04:01:43.073380] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:10:49.739 BaseBdev3 00:10:49.739 [2024-12-06 04:01:43.073593] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:10:49.739 [2024-12-06 04:01:43.073610] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:10:49.739 [2024-12-06 04:01:43.073779] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:49.739 04:01:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:49.739 04:01:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:10:49.739 04:01:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:10:49.739 04:01:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:49.739 04:01:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:10:49.739 04:01:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:49.740 04:01:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:49.740 04:01:43 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:49.740 04:01:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:49.740 04:01:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:49.740 04:01:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:49.740 04:01:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:10:49.740 04:01:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:49.740 04:01:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:49.999 [ 00:10:49.999 { 00:10:49.999 "name": "BaseBdev3", 00:10:49.999 "aliases": [ 00:10:49.999 "cd57212a-4e21-43c9-97ee-9389e3e8467d" 00:10:49.999 ], 00:10:49.999 "product_name": "Malloc disk", 00:10:49.999 "block_size": 512, 00:10:49.999 "num_blocks": 65536, 00:10:49.999 "uuid": "cd57212a-4e21-43c9-97ee-9389e3e8467d", 00:10:49.999 "assigned_rate_limits": { 00:10:49.999 "rw_ios_per_sec": 0, 00:10:49.999 "rw_mbytes_per_sec": 0, 00:10:49.999 "r_mbytes_per_sec": 0, 00:10:50.000 "w_mbytes_per_sec": 0 00:10:50.000 }, 00:10:50.000 "claimed": true, 00:10:50.000 "claim_type": "exclusive_write", 00:10:50.000 "zoned": false, 00:10:50.000 "supported_io_types": { 00:10:50.000 "read": true, 00:10:50.000 "write": true, 00:10:50.000 "unmap": true, 00:10:50.000 "flush": true, 00:10:50.000 "reset": true, 00:10:50.000 "nvme_admin": false, 00:10:50.000 "nvme_io": false, 00:10:50.000 "nvme_io_md": false, 00:10:50.000 "write_zeroes": true, 00:10:50.000 "zcopy": true, 00:10:50.000 "get_zone_info": false, 00:10:50.000 "zone_management": false, 00:10:50.000 "zone_append": false, 00:10:50.000 "compare": false, 00:10:50.000 "compare_and_write": false, 00:10:50.000 "abort": true, 00:10:50.000 "seek_hole": false, 00:10:50.000 "seek_data": false, 
00:10:50.000 "copy": true, 00:10:50.000 "nvme_iov_md": false 00:10:50.000 }, 00:10:50.000 "memory_domains": [ 00:10:50.000 { 00:10:50.000 "dma_device_id": "system", 00:10:50.000 "dma_device_type": 1 00:10:50.000 }, 00:10:50.000 { 00:10:50.000 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:50.000 "dma_device_type": 2 00:10:50.000 } 00:10:50.000 ], 00:10:50.000 "driver_specific": {} 00:10:50.000 } 00:10:50.000 ] 00:10:50.000 04:01:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:50.000 04:01:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:10:50.000 04:01:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:10:50.000 04:01:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:50.000 04:01:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 64 3 00:10:50.000 04:01:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:50.000 04:01:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:50.000 04:01:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:50.000 04:01:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:50.000 04:01:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:50.000 04:01:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:50.000 04:01:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:50.000 04:01:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:50.000 04:01:43 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:10:50.000 04:01:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:50.000 04:01:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:50.000 04:01:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:50.000 04:01:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:50.000 04:01:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:50.000 04:01:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:50.000 "name": "Existed_Raid", 00:10:50.000 "uuid": "8e130f13-e1e4-4cd5-89f5-8d99c61c9802", 00:10:50.000 "strip_size_kb": 64, 00:10:50.000 "state": "online", 00:10:50.000 "raid_level": "concat", 00:10:50.000 "superblock": true, 00:10:50.000 "num_base_bdevs": 3, 00:10:50.000 "num_base_bdevs_discovered": 3, 00:10:50.000 "num_base_bdevs_operational": 3, 00:10:50.000 "base_bdevs_list": [ 00:10:50.000 { 00:10:50.000 "name": "BaseBdev1", 00:10:50.000 "uuid": "6f8edc9c-35a9-4ad9-9039-ed1d43c666e6", 00:10:50.000 "is_configured": true, 00:10:50.000 "data_offset": 2048, 00:10:50.000 "data_size": 63488 00:10:50.000 }, 00:10:50.000 { 00:10:50.000 "name": "BaseBdev2", 00:10:50.000 "uuid": "48066a70-f5b9-4fad-be33-adf5746b6c55", 00:10:50.000 "is_configured": true, 00:10:50.000 "data_offset": 2048, 00:10:50.000 "data_size": 63488 00:10:50.000 }, 00:10:50.000 { 00:10:50.000 "name": "BaseBdev3", 00:10:50.000 "uuid": "cd57212a-4e21-43c9-97ee-9389e3e8467d", 00:10:50.000 "is_configured": true, 00:10:50.000 "data_offset": 2048, 00:10:50.000 "data_size": 63488 00:10:50.000 } 00:10:50.000 ] 00:10:50.000 }' 00:10:50.000 04:01:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:50.000 04:01:43 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:50.260 04:01:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:10:50.260 04:01:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:10:50.260 04:01:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:50.260 04:01:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:50.260 04:01:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:10:50.260 04:01:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:50.260 04:01:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:10:50.260 04:01:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:50.260 04:01:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:50.260 04:01:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:50.260 [2024-12-06 04:01:43.580338] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:50.260 04:01:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:50.520 04:01:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:50.520 "name": "Existed_Raid", 00:10:50.520 "aliases": [ 00:10:50.520 "8e130f13-e1e4-4cd5-89f5-8d99c61c9802" 00:10:50.520 ], 00:10:50.520 "product_name": "Raid Volume", 00:10:50.520 "block_size": 512, 00:10:50.520 "num_blocks": 190464, 00:10:50.520 "uuid": "8e130f13-e1e4-4cd5-89f5-8d99c61c9802", 00:10:50.520 "assigned_rate_limits": { 00:10:50.520 "rw_ios_per_sec": 0, 00:10:50.520 "rw_mbytes_per_sec": 0, 00:10:50.520 
"r_mbytes_per_sec": 0, 00:10:50.520 "w_mbytes_per_sec": 0 00:10:50.520 }, 00:10:50.520 "claimed": false, 00:10:50.520 "zoned": false, 00:10:50.520 "supported_io_types": { 00:10:50.520 "read": true, 00:10:50.520 "write": true, 00:10:50.520 "unmap": true, 00:10:50.520 "flush": true, 00:10:50.520 "reset": true, 00:10:50.520 "nvme_admin": false, 00:10:50.520 "nvme_io": false, 00:10:50.520 "nvme_io_md": false, 00:10:50.520 "write_zeroes": true, 00:10:50.520 "zcopy": false, 00:10:50.520 "get_zone_info": false, 00:10:50.520 "zone_management": false, 00:10:50.520 "zone_append": false, 00:10:50.520 "compare": false, 00:10:50.520 "compare_and_write": false, 00:10:50.520 "abort": false, 00:10:50.520 "seek_hole": false, 00:10:50.520 "seek_data": false, 00:10:50.520 "copy": false, 00:10:50.520 "nvme_iov_md": false 00:10:50.520 }, 00:10:50.520 "memory_domains": [ 00:10:50.520 { 00:10:50.520 "dma_device_id": "system", 00:10:50.520 "dma_device_type": 1 00:10:50.520 }, 00:10:50.520 { 00:10:50.520 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:50.520 "dma_device_type": 2 00:10:50.520 }, 00:10:50.520 { 00:10:50.520 "dma_device_id": "system", 00:10:50.520 "dma_device_type": 1 00:10:50.520 }, 00:10:50.520 { 00:10:50.520 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:50.520 "dma_device_type": 2 00:10:50.520 }, 00:10:50.520 { 00:10:50.520 "dma_device_id": "system", 00:10:50.520 "dma_device_type": 1 00:10:50.520 }, 00:10:50.520 { 00:10:50.520 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:50.520 "dma_device_type": 2 00:10:50.520 } 00:10:50.520 ], 00:10:50.520 "driver_specific": { 00:10:50.520 "raid": { 00:10:50.521 "uuid": "8e130f13-e1e4-4cd5-89f5-8d99c61c9802", 00:10:50.521 "strip_size_kb": 64, 00:10:50.521 "state": "online", 00:10:50.521 "raid_level": "concat", 00:10:50.521 "superblock": true, 00:10:50.521 "num_base_bdevs": 3, 00:10:50.521 "num_base_bdevs_discovered": 3, 00:10:50.521 "num_base_bdevs_operational": 3, 00:10:50.521 "base_bdevs_list": [ 00:10:50.521 { 00:10:50.521 
"name": "BaseBdev1", 00:10:50.521 "uuid": "6f8edc9c-35a9-4ad9-9039-ed1d43c666e6", 00:10:50.521 "is_configured": true, 00:10:50.521 "data_offset": 2048, 00:10:50.521 "data_size": 63488 00:10:50.521 }, 00:10:50.521 { 00:10:50.521 "name": "BaseBdev2", 00:10:50.521 "uuid": "48066a70-f5b9-4fad-be33-adf5746b6c55", 00:10:50.521 "is_configured": true, 00:10:50.521 "data_offset": 2048, 00:10:50.521 "data_size": 63488 00:10:50.521 }, 00:10:50.521 { 00:10:50.521 "name": "BaseBdev3", 00:10:50.521 "uuid": "cd57212a-4e21-43c9-97ee-9389e3e8467d", 00:10:50.521 "is_configured": true, 00:10:50.521 "data_offset": 2048, 00:10:50.521 "data_size": 63488 00:10:50.521 } 00:10:50.521 ] 00:10:50.521 } 00:10:50.521 } 00:10:50.521 }' 00:10:50.521 04:01:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:50.521 04:01:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:10:50.521 BaseBdev2 00:10:50.521 BaseBdev3' 00:10:50.521 04:01:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:50.521 04:01:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:50.521 04:01:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:50.521 04:01:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:10:50.521 04:01:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:50.521 04:01:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:50.521 04:01:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:50.521 04:01:43 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:50.521 04:01:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:50.521 04:01:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:50.521 04:01:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:50.521 04:01:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:10:50.521 04:01:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:50.521 04:01:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:50.521 04:01:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:50.521 04:01:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:50.521 04:01:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:50.521 04:01:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:50.521 04:01:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:50.521 04:01:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:10:50.521 04:01:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:50.521 04:01:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:50.521 04:01:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:50.521 04:01:43 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:50.521 04:01:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:50.521 04:01:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:50.521 04:01:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:10:50.521 04:01:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:50.521 04:01:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:50.521 [2024-12-06 04:01:43.843549] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:10:50.521 [2024-12-06 04:01:43.843628] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:50.521 [2024-12-06 04:01:43.843724] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:50.781 04:01:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:50.781 04:01:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:10:50.781 04:01:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy concat 00:10:50.781 04:01:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:10:50.781 04:01:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1 00:10:50.781 04:01:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:10:50.781 04:01:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 2 00:10:50.781 04:01:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:50.781 04:01:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local 
expected_state=offline 00:10:50.781 04:01:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:50.781 04:01:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:50.781 04:01:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:10:50.781 04:01:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:50.781 04:01:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:50.781 04:01:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:50.781 04:01:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:50.781 04:01:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:50.781 04:01:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:50.781 04:01:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:50.781 04:01:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:50.781 04:01:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:50.781 04:01:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:50.781 "name": "Existed_Raid", 00:10:50.781 "uuid": "8e130f13-e1e4-4cd5-89f5-8d99c61c9802", 00:10:50.781 "strip_size_kb": 64, 00:10:50.781 "state": "offline", 00:10:50.781 "raid_level": "concat", 00:10:50.781 "superblock": true, 00:10:50.781 "num_base_bdevs": 3, 00:10:50.781 "num_base_bdevs_discovered": 2, 00:10:50.781 "num_base_bdevs_operational": 2, 00:10:50.781 "base_bdevs_list": [ 00:10:50.781 { 00:10:50.781 "name": null, 00:10:50.781 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:10:50.781 "is_configured": false, 00:10:50.781 "data_offset": 0, 00:10:50.781 "data_size": 63488 00:10:50.781 }, 00:10:50.781 { 00:10:50.781 "name": "BaseBdev2", 00:10:50.781 "uuid": "48066a70-f5b9-4fad-be33-adf5746b6c55", 00:10:50.781 "is_configured": true, 00:10:50.781 "data_offset": 2048, 00:10:50.781 "data_size": 63488 00:10:50.781 }, 00:10:50.781 { 00:10:50.781 "name": "BaseBdev3", 00:10:50.781 "uuid": "cd57212a-4e21-43c9-97ee-9389e3e8467d", 00:10:50.781 "is_configured": true, 00:10:50.781 "data_offset": 2048, 00:10:50.782 "data_size": 63488 00:10:50.782 } 00:10:50.782 ] 00:10:50.782 }' 00:10:50.782 04:01:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:50.782 04:01:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:51.042 04:01:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:10:51.042 04:01:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:51.042 04:01:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:51.042 04:01:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:51.042 04:01:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:51.042 04:01:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:10:51.042 04:01:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:51.302 04:01:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:10:51.302 04:01:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:51.302 04:01:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 
00:10:51.302 04:01:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:51.302 04:01:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:51.302 [2024-12-06 04:01:44.402214] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:10:51.302 04:01:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:51.302 04:01:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:51.302 04:01:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:51.302 04:01:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:51.302 04:01:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:51.302 04:01:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:10:51.302 04:01:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:51.302 04:01:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:51.302 04:01:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:10:51.302 04:01:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:51.302 04:01:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:10:51.302 04:01:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:51.302 04:01:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:51.302 [2024-12-06 04:01:44.557180] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:10:51.302 [2024-12-06 04:01:44.557278] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: 
raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:10:51.563 04:01:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:51.563 04:01:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:51.563 04:01:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:51.563 04:01:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:51.563 04:01:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:10:51.563 04:01:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:51.563 04:01:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:51.563 04:01:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:51.563 04:01:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:10:51.563 04:01:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:10:51.563 04:01:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:10:51.563 04:01:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:10:51.563 04:01:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:51.563 04:01:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:10:51.563 04:01:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:51.563 04:01:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:51.563 BaseBdev2 00:10:51.563 04:01:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:51.563 
04:01:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:10:51.563 04:01:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:10:51.563 04:01:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:51.563 04:01:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:10:51.563 04:01:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:51.563 04:01:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:51.563 04:01:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:51.563 04:01:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:51.563 04:01:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:51.563 04:01:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:51.563 04:01:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:10:51.563 04:01:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:51.563 04:01:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:51.563 [ 00:10:51.563 { 00:10:51.563 "name": "BaseBdev2", 00:10:51.563 "aliases": [ 00:10:51.563 "cab88324-2f70-480e-9162-113cf761ed6d" 00:10:51.563 ], 00:10:51.563 "product_name": "Malloc disk", 00:10:51.563 "block_size": 512, 00:10:51.563 "num_blocks": 65536, 00:10:51.563 "uuid": "cab88324-2f70-480e-9162-113cf761ed6d", 00:10:51.563 "assigned_rate_limits": { 00:10:51.563 "rw_ios_per_sec": 0, 00:10:51.563 "rw_mbytes_per_sec": 0, 00:10:51.563 "r_mbytes_per_sec": 0, 00:10:51.563 "w_mbytes_per_sec": 0 
00:10:51.563 }, 00:10:51.563 "claimed": false, 00:10:51.563 "zoned": false, 00:10:51.563 "supported_io_types": { 00:10:51.563 "read": true, 00:10:51.563 "write": true, 00:10:51.563 "unmap": true, 00:10:51.563 "flush": true, 00:10:51.563 "reset": true, 00:10:51.563 "nvme_admin": false, 00:10:51.563 "nvme_io": false, 00:10:51.563 "nvme_io_md": false, 00:10:51.563 "write_zeroes": true, 00:10:51.563 "zcopy": true, 00:10:51.563 "get_zone_info": false, 00:10:51.563 "zone_management": false, 00:10:51.563 "zone_append": false, 00:10:51.563 "compare": false, 00:10:51.563 "compare_and_write": false, 00:10:51.563 "abort": true, 00:10:51.563 "seek_hole": false, 00:10:51.563 "seek_data": false, 00:10:51.563 "copy": true, 00:10:51.563 "nvme_iov_md": false 00:10:51.563 }, 00:10:51.563 "memory_domains": [ 00:10:51.563 { 00:10:51.563 "dma_device_id": "system", 00:10:51.563 "dma_device_type": 1 00:10:51.563 }, 00:10:51.563 { 00:10:51.563 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:51.563 "dma_device_type": 2 00:10:51.563 } 00:10:51.563 ], 00:10:51.563 "driver_specific": {} 00:10:51.563 } 00:10:51.563 ] 00:10:51.563 04:01:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:51.563 04:01:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:10:51.563 04:01:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:10:51.563 04:01:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:51.563 04:01:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:10:51.563 04:01:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:51.563 04:01:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:51.563 BaseBdev3 00:10:51.563 04:01:44 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:51.563 04:01:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:10:51.563 04:01:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:10:51.563 04:01:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:51.564 04:01:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:10:51.564 04:01:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:51.564 04:01:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:51.564 04:01:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:51.564 04:01:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:51.564 04:01:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:51.564 04:01:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:51.564 04:01:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:10:51.564 04:01:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:51.564 04:01:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:51.564 [ 00:10:51.564 { 00:10:51.564 "name": "BaseBdev3", 00:10:51.564 "aliases": [ 00:10:51.564 "ab4595dc-8c99-4154-8351-56fd9adb62c8" 00:10:51.564 ], 00:10:51.564 "product_name": "Malloc disk", 00:10:51.564 "block_size": 512, 00:10:51.564 "num_blocks": 65536, 00:10:51.564 "uuid": "ab4595dc-8c99-4154-8351-56fd9adb62c8", 00:10:51.564 "assigned_rate_limits": { 00:10:51.564 "rw_ios_per_sec": 0, 00:10:51.564 "rw_mbytes_per_sec": 0, 
00:10:51.564 "r_mbytes_per_sec": 0, 00:10:51.564 "w_mbytes_per_sec": 0 00:10:51.564 }, 00:10:51.564 "claimed": false, 00:10:51.564 "zoned": false, 00:10:51.564 "supported_io_types": { 00:10:51.564 "read": true, 00:10:51.564 "write": true, 00:10:51.564 "unmap": true, 00:10:51.564 "flush": true, 00:10:51.564 "reset": true, 00:10:51.564 "nvme_admin": false, 00:10:51.564 "nvme_io": false, 00:10:51.564 "nvme_io_md": false, 00:10:51.564 "write_zeroes": true, 00:10:51.564 "zcopy": true, 00:10:51.564 "get_zone_info": false, 00:10:51.564 "zone_management": false, 00:10:51.564 "zone_append": false, 00:10:51.564 "compare": false, 00:10:51.564 "compare_and_write": false, 00:10:51.564 "abort": true, 00:10:51.564 "seek_hole": false, 00:10:51.564 "seek_data": false, 00:10:51.564 "copy": true, 00:10:51.564 "nvme_iov_md": false 00:10:51.564 }, 00:10:51.564 "memory_domains": [ 00:10:51.564 { 00:10:51.564 "dma_device_id": "system", 00:10:51.564 "dma_device_type": 1 00:10:51.564 }, 00:10:51.564 { 00:10:51.564 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:51.564 "dma_device_type": 2 00:10:51.564 } 00:10:51.564 ], 00:10:51.564 "driver_specific": {} 00:10:51.564 } 00:10:51.564 ] 00:10:51.564 04:01:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:51.564 04:01:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:10:51.564 04:01:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:10:51.564 04:01:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:51.564 04:01:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:10:51.564 04:01:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:51.564 04:01:44 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:10:51.564 [2024-12-06 04:01:44.884419] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:51.564 [2024-12-06 04:01:44.884527] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:51.564 [2024-12-06 04:01:44.884599] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:51.564 [2024-12-06 04:01:44.886606] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:51.564 04:01:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:51.564 04:01:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:10:51.564 04:01:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:51.564 04:01:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:51.564 04:01:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:51.564 04:01:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:51.564 04:01:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:51.564 04:01:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:51.564 04:01:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:51.564 04:01:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:51.564 04:01:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:51.564 04:01:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:51.564 04:01:44 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:51.564 04:01:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:51.564 04:01:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:51.564 04:01:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:51.852 04:01:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:51.852 "name": "Existed_Raid", 00:10:51.852 "uuid": "59c5ac25-f3b3-48da-abda-7cb6ba5a3c84", 00:10:51.852 "strip_size_kb": 64, 00:10:51.852 "state": "configuring", 00:10:51.852 "raid_level": "concat", 00:10:51.852 "superblock": true, 00:10:51.852 "num_base_bdevs": 3, 00:10:51.852 "num_base_bdevs_discovered": 2, 00:10:51.852 "num_base_bdevs_operational": 3, 00:10:51.852 "base_bdevs_list": [ 00:10:51.852 { 00:10:51.852 "name": "BaseBdev1", 00:10:51.852 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:51.852 "is_configured": false, 00:10:51.852 "data_offset": 0, 00:10:51.852 "data_size": 0 00:10:51.852 }, 00:10:51.852 { 00:10:51.852 "name": "BaseBdev2", 00:10:51.852 "uuid": "cab88324-2f70-480e-9162-113cf761ed6d", 00:10:51.852 "is_configured": true, 00:10:51.852 "data_offset": 2048, 00:10:51.852 "data_size": 63488 00:10:51.852 }, 00:10:51.852 { 00:10:51.852 "name": "BaseBdev3", 00:10:51.852 "uuid": "ab4595dc-8c99-4154-8351-56fd9adb62c8", 00:10:51.852 "is_configured": true, 00:10:51.852 "data_offset": 2048, 00:10:51.852 "data_size": 63488 00:10:51.852 } 00:10:51.852 ] 00:10:51.852 }' 00:10:51.852 04:01:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:51.852 04:01:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:52.116 04:01:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 
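The trace above captures the raid bdev's JSON via `rpc_cmd bdev_raid_get_bdevs all` piped through `jq -r '.[] | select(.name == "Existed_Raid")'`. As a hedged illustration (not SPDK code — the sample payload is abridged from the values printed in this log, and `select_raid_bdev` is a hypothetical helper name), the jq selection and the discovered-bdev count can be sketched in Python:

```python
import json

# Sample bdev_raid_get_bdevs output, abridged from the JSON shown in the log:
# BaseBdev1 was deleted, so only two of the three slots are configured.
raid_bdevs_json = json.dumps([{
    "name": "Existed_Raid",
    "state": "configuring",
    "raid_level": "concat",
    "strip_size_kb": 64,
    "num_base_bdevs": 3,
    "base_bdevs_list": [
        {"name": "BaseBdev1", "is_configured": False},
        {"name": "BaseBdev2", "is_configured": True},
        {"name": "BaseBdev3", "is_configured": True},
    ],
}])

def select_raid_bdev(payload, name):
    """Python equivalent of jq's '.[] | select(.name == "<name>")'."""
    for bdev in json.loads(payload):
        if bdev["name"] == name:
            return bdev
    return None

info = select_raid_bdev(raid_bdevs_json, "Existed_Raid")
discovered = sum(1 for b in info["base_bdevs_list"] if b["is_configured"])
print(info["state"], discovered)  # configuring 2
```

This mirrors why the log reports `"num_base_bdevs_discovered": 2` while the array is still in the `configuring` state.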
00:10:52.116 04:01:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:52.116 04:01:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:52.116 [2024-12-06 04:01:45.319725] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:10:52.116 04:01:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:52.116 04:01:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:10:52.116 04:01:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:52.116 04:01:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:52.116 04:01:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:52.116 04:01:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:52.116 04:01:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:52.116 04:01:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:52.116 04:01:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:52.116 04:01:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:52.116 04:01:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:52.116 04:01:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:52.116 04:01:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:52.116 04:01:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 
00:10:52.116 04:01:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:52.116 04:01:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:52.116 04:01:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:52.116 "name": "Existed_Raid", 00:10:52.116 "uuid": "59c5ac25-f3b3-48da-abda-7cb6ba5a3c84", 00:10:52.116 "strip_size_kb": 64, 00:10:52.116 "state": "configuring", 00:10:52.116 "raid_level": "concat", 00:10:52.116 "superblock": true, 00:10:52.116 "num_base_bdevs": 3, 00:10:52.116 "num_base_bdevs_discovered": 1, 00:10:52.116 "num_base_bdevs_operational": 3, 00:10:52.116 "base_bdevs_list": [ 00:10:52.116 { 00:10:52.116 "name": "BaseBdev1", 00:10:52.116 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:52.116 "is_configured": false, 00:10:52.116 "data_offset": 0, 00:10:52.116 "data_size": 0 00:10:52.116 }, 00:10:52.116 { 00:10:52.116 "name": null, 00:10:52.116 "uuid": "cab88324-2f70-480e-9162-113cf761ed6d", 00:10:52.116 "is_configured": false, 00:10:52.116 "data_offset": 0, 00:10:52.116 "data_size": 63488 00:10:52.116 }, 00:10:52.116 { 00:10:52.116 "name": "BaseBdev3", 00:10:52.116 "uuid": "ab4595dc-8c99-4154-8351-56fd9adb62c8", 00:10:52.116 "is_configured": true, 00:10:52.116 "data_offset": 2048, 00:10:52.116 "data_size": 63488 00:10:52.116 } 00:10:52.116 ] 00:10:52.116 }' 00:10:52.116 04:01:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:52.116 04:01:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:52.684 04:01:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:52.684 04:01:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:10:52.684 04:01:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 
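After `bdev_raid_remove_base_bdev BaseBdev2`, the script checks the second slot with `jq '.[0].base_bdevs_list[1].is_configured'` (bdev_raid.sh@295) and expects `false`. A minimal Python sketch of that index-based check, using a payload abridged from the base_bdevs_list printed above (the removed bdev's slot keeps its uuid but its name becomes null), could look like:

```python
import json

# Abridged Existed_Raid entry after removing BaseBdev2, mirroring the log:
# the slot is retained as a placeholder with is_configured set to false.
payload = json.dumps([{
    "name": "Existed_Raid",
    "base_bdevs_list": [
        {"name": "BaseBdev1", "is_configured": False},
        {"name": None,
         "uuid": "cab88324-2f70-480e-9162-113cf761ed6d",
         "is_configured": False},
        {"name": "BaseBdev3", "is_configured": True},
    ],
}])

# Python equivalent of jq's '.[0].base_bdevs_list[1].is_configured'
slot_configured = json.loads(payload)[0]["base_bdevs_list"][1]["is_configured"]
print(slot_configured)  # False
```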
00:10:52.684 04:01:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:52.684 04:01:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:52.684 04:01:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:10:52.684 04:01:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:10:52.684 04:01:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:52.684 04:01:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:52.684 [2024-12-06 04:01:45.865761] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:52.684 BaseBdev1 00:10:52.684 04:01:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:52.684 04:01:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:10:52.684 04:01:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:10:52.684 04:01:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:52.684 04:01:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:10:52.684 04:01:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:52.684 04:01:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:52.684 04:01:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:52.684 04:01:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:52.684 04:01:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:52.684 04:01:45 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:52.684 04:01:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:10:52.684 04:01:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:52.684 04:01:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:52.684 [ 00:10:52.684 { 00:10:52.684 "name": "BaseBdev1", 00:10:52.684 "aliases": [ 00:10:52.684 "3fd7eb78-04a0-4586-9073-83d6c157240f" 00:10:52.684 ], 00:10:52.684 "product_name": "Malloc disk", 00:10:52.684 "block_size": 512, 00:10:52.684 "num_blocks": 65536, 00:10:52.684 "uuid": "3fd7eb78-04a0-4586-9073-83d6c157240f", 00:10:52.684 "assigned_rate_limits": { 00:10:52.684 "rw_ios_per_sec": 0, 00:10:52.684 "rw_mbytes_per_sec": 0, 00:10:52.684 "r_mbytes_per_sec": 0, 00:10:52.684 "w_mbytes_per_sec": 0 00:10:52.684 }, 00:10:52.684 "claimed": true, 00:10:52.684 "claim_type": "exclusive_write", 00:10:52.684 "zoned": false, 00:10:52.684 "supported_io_types": { 00:10:52.684 "read": true, 00:10:52.684 "write": true, 00:10:52.684 "unmap": true, 00:10:52.684 "flush": true, 00:10:52.684 "reset": true, 00:10:52.684 "nvme_admin": false, 00:10:52.684 "nvme_io": false, 00:10:52.684 "nvme_io_md": false, 00:10:52.684 "write_zeroes": true, 00:10:52.684 "zcopy": true, 00:10:52.684 "get_zone_info": false, 00:10:52.684 "zone_management": false, 00:10:52.684 "zone_append": false, 00:10:52.684 "compare": false, 00:10:52.684 "compare_and_write": false, 00:10:52.684 "abort": true, 00:10:52.684 "seek_hole": false, 00:10:52.684 "seek_data": false, 00:10:52.684 "copy": true, 00:10:52.684 "nvme_iov_md": false 00:10:52.684 }, 00:10:52.684 "memory_domains": [ 00:10:52.684 { 00:10:52.684 "dma_device_id": "system", 00:10:52.684 "dma_device_type": 1 00:10:52.684 }, 00:10:52.684 { 00:10:52.684 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:52.684 
"dma_device_type": 2 00:10:52.684 } 00:10:52.684 ], 00:10:52.684 "driver_specific": {} 00:10:52.684 } 00:10:52.684 ] 00:10:52.684 04:01:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:52.684 04:01:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:10:52.684 04:01:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:10:52.684 04:01:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:52.684 04:01:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:52.684 04:01:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:52.684 04:01:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:52.684 04:01:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:52.684 04:01:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:52.684 04:01:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:52.684 04:01:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:52.684 04:01:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:52.684 04:01:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:52.684 04:01:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:52.684 04:01:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:52.684 04:01:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 
00:10:52.684 04:01:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:52.684 04:01:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:52.684 "name": "Existed_Raid", 00:10:52.684 "uuid": "59c5ac25-f3b3-48da-abda-7cb6ba5a3c84", 00:10:52.684 "strip_size_kb": 64, 00:10:52.684 "state": "configuring", 00:10:52.684 "raid_level": "concat", 00:10:52.684 "superblock": true, 00:10:52.684 "num_base_bdevs": 3, 00:10:52.684 "num_base_bdevs_discovered": 2, 00:10:52.684 "num_base_bdevs_operational": 3, 00:10:52.684 "base_bdevs_list": [ 00:10:52.684 { 00:10:52.684 "name": "BaseBdev1", 00:10:52.684 "uuid": "3fd7eb78-04a0-4586-9073-83d6c157240f", 00:10:52.684 "is_configured": true, 00:10:52.684 "data_offset": 2048, 00:10:52.684 "data_size": 63488 00:10:52.684 }, 00:10:52.684 { 00:10:52.684 "name": null, 00:10:52.684 "uuid": "cab88324-2f70-480e-9162-113cf761ed6d", 00:10:52.684 "is_configured": false, 00:10:52.684 "data_offset": 0, 00:10:52.684 "data_size": 63488 00:10:52.684 }, 00:10:52.684 { 00:10:52.684 "name": "BaseBdev3", 00:10:52.684 "uuid": "ab4595dc-8c99-4154-8351-56fd9adb62c8", 00:10:52.684 "is_configured": true, 00:10:52.684 "data_offset": 2048, 00:10:52.684 "data_size": 63488 00:10:52.684 } 00:10:52.684 ] 00:10:52.684 }' 00:10:52.684 04:01:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:52.684 04:01:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:53.250 04:01:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:53.250 04:01:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:10:53.250 04:01:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:53.250 04:01:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set 
+x 00:10:53.250 04:01:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:53.250 04:01:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:10:53.250 04:01:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:10:53.250 04:01:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:53.250 04:01:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:53.250 [2024-12-06 04:01:46.408940] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:10:53.250 04:01:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:53.250 04:01:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:10:53.250 04:01:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:53.250 04:01:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:53.250 04:01:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:53.250 04:01:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:53.250 04:01:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:53.250 04:01:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:53.250 04:01:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:53.250 04:01:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:53.250 04:01:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:53.250 
04:01:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:53.250 04:01:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:53.250 04:01:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:53.250 04:01:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:53.250 04:01:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:53.250 04:01:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:53.250 "name": "Existed_Raid", 00:10:53.250 "uuid": "59c5ac25-f3b3-48da-abda-7cb6ba5a3c84", 00:10:53.250 "strip_size_kb": 64, 00:10:53.250 "state": "configuring", 00:10:53.250 "raid_level": "concat", 00:10:53.250 "superblock": true, 00:10:53.250 "num_base_bdevs": 3, 00:10:53.250 "num_base_bdevs_discovered": 1, 00:10:53.250 "num_base_bdevs_operational": 3, 00:10:53.250 "base_bdevs_list": [ 00:10:53.250 { 00:10:53.250 "name": "BaseBdev1", 00:10:53.250 "uuid": "3fd7eb78-04a0-4586-9073-83d6c157240f", 00:10:53.250 "is_configured": true, 00:10:53.250 "data_offset": 2048, 00:10:53.250 "data_size": 63488 00:10:53.250 }, 00:10:53.250 { 00:10:53.250 "name": null, 00:10:53.250 "uuid": "cab88324-2f70-480e-9162-113cf761ed6d", 00:10:53.250 "is_configured": false, 00:10:53.250 "data_offset": 0, 00:10:53.250 "data_size": 63488 00:10:53.250 }, 00:10:53.250 { 00:10:53.250 "name": null, 00:10:53.250 "uuid": "ab4595dc-8c99-4154-8351-56fd9adb62c8", 00:10:53.250 "is_configured": false, 00:10:53.250 "data_offset": 0, 00:10:53.250 "data_size": 63488 00:10:53.250 } 00:10:53.250 ] 00:10:53.250 }' 00:10:53.250 04:01:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:53.250 04:01:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:53.508 
04:01:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:10:53.767 04:01:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:53.767 04:01:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:53.767 04:01:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:53.767 04:01:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:53.767 04:01:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:10:53.767 04:01:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:10:53.767 04:01:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:53.767 04:01:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:53.767 [2024-12-06 04:01:46.916145] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:53.767 04:01:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:53.767 04:01:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:10:53.767 04:01:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:53.767 04:01:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:53.767 04:01:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:53.767 04:01:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:53.767 04:01:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # 
local num_base_bdevs_operational=3 00:10:53.767 04:01:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:53.767 04:01:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:53.767 04:01:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:53.767 04:01:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:53.767 04:01:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:53.767 04:01:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:53.767 04:01:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:53.767 04:01:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:53.767 04:01:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:53.767 04:01:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:53.767 "name": "Existed_Raid", 00:10:53.767 "uuid": "59c5ac25-f3b3-48da-abda-7cb6ba5a3c84", 00:10:53.767 "strip_size_kb": 64, 00:10:53.767 "state": "configuring", 00:10:53.767 "raid_level": "concat", 00:10:53.767 "superblock": true, 00:10:53.767 "num_base_bdevs": 3, 00:10:53.767 "num_base_bdevs_discovered": 2, 00:10:53.767 "num_base_bdevs_operational": 3, 00:10:53.767 "base_bdevs_list": [ 00:10:53.767 { 00:10:53.767 "name": "BaseBdev1", 00:10:53.767 "uuid": "3fd7eb78-04a0-4586-9073-83d6c157240f", 00:10:53.767 "is_configured": true, 00:10:53.767 "data_offset": 2048, 00:10:53.767 "data_size": 63488 00:10:53.767 }, 00:10:53.767 { 00:10:53.767 "name": null, 00:10:53.767 "uuid": "cab88324-2f70-480e-9162-113cf761ed6d", 00:10:53.767 "is_configured": false, 00:10:53.767 "data_offset": 0, 00:10:53.767 "data_size": 
63488 00:10:53.767 }, 00:10:53.767 { 00:10:53.767 "name": "BaseBdev3", 00:10:53.767 "uuid": "ab4595dc-8c99-4154-8351-56fd9adb62c8", 00:10:53.767 "is_configured": true, 00:10:53.767 "data_offset": 2048, 00:10:53.767 "data_size": 63488 00:10:53.767 } 00:10:53.767 ] 00:10:53.767 }' 00:10:53.767 04:01:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:53.767 04:01:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:54.025 04:01:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:54.284 04:01:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:10:54.284 04:01:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:54.284 04:01:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:54.284 04:01:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:54.284 04:01:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:10:54.284 04:01:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:10:54.284 04:01:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:54.284 04:01:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:54.284 [2024-12-06 04:01:47.431287] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:10:54.284 04:01:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:54.284 04:01:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:10:54.284 04:01:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local 
raid_bdev_name=Existed_Raid 00:10:54.284 04:01:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:54.284 04:01:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:54.284 04:01:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:54.284 04:01:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:54.284 04:01:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:54.284 04:01:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:54.284 04:01:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:54.284 04:01:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:54.284 04:01:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:54.284 04:01:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:54.284 04:01:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:54.284 04:01:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:54.284 04:01:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:54.284 04:01:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:54.284 "name": "Existed_Raid", 00:10:54.284 "uuid": "59c5ac25-f3b3-48da-abda-7cb6ba5a3c84", 00:10:54.284 "strip_size_kb": 64, 00:10:54.284 "state": "configuring", 00:10:54.284 "raid_level": "concat", 00:10:54.284 "superblock": true, 00:10:54.284 "num_base_bdevs": 3, 00:10:54.284 "num_base_bdevs_discovered": 1, 00:10:54.284 "num_base_bdevs_operational": 
3, 00:10:54.284 "base_bdevs_list": [ 00:10:54.284 { 00:10:54.284 "name": null, 00:10:54.284 "uuid": "3fd7eb78-04a0-4586-9073-83d6c157240f", 00:10:54.284 "is_configured": false, 00:10:54.284 "data_offset": 0, 00:10:54.284 "data_size": 63488 00:10:54.284 }, 00:10:54.284 { 00:10:54.284 "name": null, 00:10:54.284 "uuid": "cab88324-2f70-480e-9162-113cf761ed6d", 00:10:54.284 "is_configured": false, 00:10:54.284 "data_offset": 0, 00:10:54.284 "data_size": 63488 00:10:54.284 }, 00:10:54.284 { 00:10:54.284 "name": "BaseBdev3", 00:10:54.284 "uuid": "ab4595dc-8c99-4154-8351-56fd9adb62c8", 00:10:54.284 "is_configured": true, 00:10:54.284 "data_offset": 2048, 00:10:54.284 "data_size": 63488 00:10:54.284 } 00:10:54.284 ] 00:10:54.284 }' 00:10:54.284 04:01:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:54.284 04:01:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:54.851 04:01:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:54.851 04:01:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:10:54.851 04:01:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:54.851 04:01:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:54.851 04:01:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:54.851 04:01:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:10:54.851 04:01:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:10:54.851 04:01:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:54.851 04:01:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set 
+x 00:10:54.851 [2024-12-06 04:01:48.021510] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:54.851 04:01:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:54.851 04:01:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:10:54.851 04:01:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:54.851 04:01:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:54.851 04:01:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:54.851 04:01:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:54.851 04:01:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:54.851 04:01:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:54.851 04:01:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:54.851 04:01:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:54.851 04:01:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:54.851 04:01:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:54.851 04:01:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:54.851 04:01:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:54.851 04:01:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:54.851 04:01:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- 
# [[ 0 == 0 ]] 00:10:54.851 04:01:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:54.851 "name": "Existed_Raid", 00:10:54.851 "uuid": "59c5ac25-f3b3-48da-abda-7cb6ba5a3c84", 00:10:54.851 "strip_size_kb": 64, 00:10:54.851 "state": "configuring", 00:10:54.851 "raid_level": "concat", 00:10:54.851 "superblock": true, 00:10:54.851 "num_base_bdevs": 3, 00:10:54.851 "num_base_bdevs_discovered": 2, 00:10:54.851 "num_base_bdevs_operational": 3, 00:10:54.851 "base_bdevs_list": [ 00:10:54.851 { 00:10:54.851 "name": null, 00:10:54.851 "uuid": "3fd7eb78-04a0-4586-9073-83d6c157240f", 00:10:54.851 "is_configured": false, 00:10:54.851 "data_offset": 0, 00:10:54.851 "data_size": 63488 00:10:54.851 }, 00:10:54.851 { 00:10:54.851 "name": "BaseBdev2", 00:10:54.851 "uuid": "cab88324-2f70-480e-9162-113cf761ed6d", 00:10:54.851 "is_configured": true, 00:10:54.851 "data_offset": 2048, 00:10:54.851 "data_size": 63488 00:10:54.851 }, 00:10:54.851 { 00:10:54.851 "name": "BaseBdev3", 00:10:54.851 "uuid": "ab4595dc-8c99-4154-8351-56fd9adb62c8", 00:10:54.851 "is_configured": true, 00:10:54.851 "data_offset": 2048, 00:10:54.852 "data_size": 63488 00:10:54.852 } 00:10:54.852 ] 00:10:54.852 }' 00:10:54.852 04:01:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:54.852 04:01:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:55.109 04:01:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:10:55.109 04:01:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:55.109 04:01:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:55.109 04:01:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:55.109 04:01:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:10:55.367 04:01:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:10:55.367 04:01:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:55.367 04:01:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:10:55.367 04:01:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:55.367 04:01:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:55.367 04:01:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:55.367 04:01:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 3fd7eb78-04a0-4586-9073-83d6c157240f 00:10:55.367 04:01:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:55.367 04:01:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:55.367 [2024-12-06 04:01:48.553370] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:10:55.367 [2024-12-06 04:01:48.553719] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:10:55.367 [2024-12-06 04:01:48.553764] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:10:55.367 [2024-12-06 04:01:48.554096] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:10:55.367 [2024-12-06 04:01:48.554284] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:10:55.367 NewBaseBdev 00:10:55.367 [2024-12-06 04:01:48.554326] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:10:55.367 [2024-12-06 04:01:48.554508] bdev_raid.c: 
345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:55.367 04:01:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:55.367 04:01:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:10:55.367 04:01:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:10:55.367 04:01:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:55.368 04:01:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:10:55.368 04:01:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:55.368 04:01:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:55.368 04:01:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:55.368 04:01:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:55.368 04:01:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:55.368 04:01:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:55.368 04:01:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:10:55.368 04:01:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:55.368 04:01:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:55.368 [ 00:10:55.368 { 00:10:55.368 "name": "NewBaseBdev", 00:10:55.368 "aliases": [ 00:10:55.368 "3fd7eb78-04a0-4586-9073-83d6c157240f" 00:10:55.368 ], 00:10:55.368 "product_name": "Malloc disk", 00:10:55.368 "block_size": 512, 00:10:55.368 "num_blocks": 65536, 00:10:55.368 "uuid": 
"3fd7eb78-04a0-4586-9073-83d6c157240f", 00:10:55.368 "assigned_rate_limits": { 00:10:55.368 "rw_ios_per_sec": 0, 00:10:55.368 "rw_mbytes_per_sec": 0, 00:10:55.368 "r_mbytes_per_sec": 0, 00:10:55.368 "w_mbytes_per_sec": 0 00:10:55.368 }, 00:10:55.368 "claimed": true, 00:10:55.368 "claim_type": "exclusive_write", 00:10:55.368 "zoned": false, 00:10:55.368 "supported_io_types": { 00:10:55.368 "read": true, 00:10:55.368 "write": true, 00:10:55.368 "unmap": true, 00:10:55.368 "flush": true, 00:10:55.368 "reset": true, 00:10:55.368 "nvme_admin": false, 00:10:55.368 "nvme_io": false, 00:10:55.368 "nvme_io_md": false, 00:10:55.368 "write_zeroes": true, 00:10:55.368 "zcopy": true, 00:10:55.368 "get_zone_info": false, 00:10:55.368 "zone_management": false, 00:10:55.368 "zone_append": false, 00:10:55.368 "compare": false, 00:10:55.368 "compare_and_write": false, 00:10:55.368 "abort": true, 00:10:55.368 "seek_hole": false, 00:10:55.368 "seek_data": false, 00:10:55.368 "copy": true, 00:10:55.368 "nvme_iov_md": false 00:10:55.368 }, 00:10:55.368 "memory_domains": [ 00:10:55.368 { 00:10:55.368 "dma_device_id": "system", 00:10:55.368 "dma_device_type": 1 00:10:55.368 }, 00:10:55.368 { 00:10:55.368 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:55.368 "dma_device_type": 2 00:10:55.368 } 00:10:55.368 ], 00:10:55.368 "driver_specific": {} 00:10:55.368 } 00:10:55.368 ] 00:10:55.368 04:01:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:55.368 04:01:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:10:55.368 04:01:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online concat 64 3 00:10:55.368 04:01:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:55.368 04:01:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:55.368 04:01:48 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:55.368 04:01:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:55.368 04:01:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:55.368 04:01:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:55.368 04:01:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:55.368 04:01:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:55.368 04:01:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:55.368 04:01:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:55.368 04:01:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:55.368 04:01:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:55.368 04:01:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:55.368 04:01:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:55.368 04:01:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:55.368 "name": "Existed_Raid", 00:10:55.368 "uuid": "59c5ac25-f3b3-48da-abda-7cb6ba5a3c84", 00:10:55.368 "strip_size_kb": 64, 00:10:55.368 "state": "online", 00:10:55.368 "raid_level": "concat", 00:10:55.368 "superblock": true, 00:10:55.368 "num_base_bdevs": 3, 00:10:55.368 "num_base_bdevs_discovered": 3, 00:10:55.368 "num_base_bdevs_operational": 3, 00:10:55.368 "base_bdevs_list": [ 00:10:55.368 { 00:10:55.368 "name": "NewBaseBdev", 00:10:55.368 "uuid": "3fd7eb78-04a0-4586-9073-83d6c157240f", 00:10:55.368 "is_configured": 
true, 00:10:55.368 "data_offset": 2048, 00:10:55.368 "data_size": 63488 00:10:55.368 }, 00:10:55.368 { 00:10:55.368 "name": "BaseBdev2", 00:10:55.368 "uuid": "cab88324-2f70-480e-9162-113cf761ed6d", 00:10:55.368 "is_configured": true, 00:10:55.368 "data_offset": 2048, 00:10:55.368 "data_size": 63488 00:10:55.368 }, 00:10:55.368 { 00:10:55.368 "name": "BaseBdev3", 00:10:55.368 "uuid": "ab4595dc-8c99-4154-8351-56fd9adb62c8", 00:10:55.368 "is_configured": true, 00:10:55.368 "data_offset": 2048, 00:10:55.368 "data_size": 63488 00:10:55.368 } 00:10:55.368 ] 00:10:55.368 }' 00:10:55.368 04:01:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:55.368 04:01:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:55.936 04:01:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:10:55.936 04:01:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:10:55.936 04:01:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:55.936 04:01:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:55.936 04:01:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:10:55.936 04:01:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:55.936 04:01:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:10:55.936 04:01:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:55.936 04:01:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:55.936 04:01:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:55.936 [2024-12-06 04:01:49.088911] 
bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:55.936 04:01:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:55.936 04:01:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:55.936 "name": "Existed_Raid", 00:10:55.936 "aliases": [ 00:10:55.936 "59c5ac25-f3b3-48da-abda-7cb6ba5a3c84" 00:10:55.936 ], 00:10:55.936 "product_name": "Raid Volume", 00:10:55.936 "block_size": 512, 00:10:55.936 "num_blocks": 190464, 00:10:55.936 "uuid": "59c5ac25-f3b3-48da-abda-7cb6ba5a3c84", 00:10:55.936 "assigned_rate_limits": { 00:10:55.936 "rw_ios_per_sec": 0, 00:10:55.936 "rw_mbytes_per_sec": 0, 00:10:55.936 "r_mbytes_per_sec": 0, 00:10:55.936 "w_mbytes_per_sec": 0 00:10:55.936 }, 00:10:55.936 "claimed": false, 00:10:55.936 "zoned": false, 00:10:55.936 "supported_io_types": { 00:10:55.936 "read": true, 00:10:55.936 "write": true, 00:10:55.936 "unmap": true, 00:10:55.936 "flush": true, 00:10:55.936 "reset": true, 00:10:55.936 "nvme_admin": false, 00:10:55.936 "nvme_io": false, 00:10:55.936 "nvme_io_md": false, 00:10:55.936 "write_zeroes": true, 00:10:55.936 "zcopy": false, 00:10:55.936 "get_zone_info": false, 00:10:55.936 "zone_management": false, 00:10:55.936 "zone_append": false, 00:10:55.936 "compare": false, 00:10:55.936 "compare_and_write": false, 00:10:55.936 "abort": false, 00:10:55.936 "seek_hole": false, 00:10:55.936 "seek_data": false, 00:10:55.936 "copy": false, 00:10:55.936 "nvme_iov_md": false 00:10:55.936 }, 00:10:55.936 "memory_domains": [ 00:10:55.936 { 00:10:55.936 "dma_device_id": "system", 00:10:55.936 "dma_device_type": 1 00:10:55.936 }, 00:10:55.936 { 00:10:55.936 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:55.936 "dma_device_type": 2 00:10:55.936 }, 00:10:55.936 { 00:10:55.936 "dma_device_id": "system", 00:10:55.936 "dma_device_type": 1 00:10:55.936 }, 00:10:55.936 { 00:10:55.936 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:55.936 
"dma_device_type": 2 00:10:55.936 }, 00:10:55.936 { 00:10:55.936 "dma_device_id": "system", 00:10:55.936 "dma_device_type": 1 00:10:55.936 }, 00:10:55.936 { 00:10:55.936 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:55.936 "dma_device_type": 2 00:10:55.936 } 00:10:55.936 ], 00:10:55.936 "driver_specific": { 00:10:55.936 "raid": { 00:10:55.936 "uuid": "59c5ac25-f3b3-48da-abda-7cb6ba5a3c84", 00:10:55.936 "strip_size_kb": 64, 00:10:55.936 "state": "online", 00:10:55.936 "raid_level": "concat", 00:10:55.936 "superblock": true, 00:10:55.936 "num_base_bdevs": 3, 00:10:55.936 "num_base_bdevs_discovered": 3, 00:10:55.936 "num_base_bdevs_operational": 3, 00:10:55.936 "base_bdevs_list": [ 00:10:55.936 { 00:10:55.936 "name": "NewBaseBdev", 00:10:55.936 "uuid": "3fd7eb78-04a0-4586-9073-83d6c157240f", 00:10:55.936 "is_configured": true, 00:10:55.936 "data_offset": 2048, 00:10:55.936 "data_size": 63488 00:10:55.936 }, 00:10:55.936 { 00:10:55.936 "name": "BaseBdev2", 00:10:55.936 "uuid": "cab88324-2f70-480e-9162-113cf761ed6d", 00:10:55.936 "is_configured": true, 00:10:55.936 "data_offset": 2048, 00:10:55.936 "data_size": 63488 00:10:55.936 }, 00:10:55.936 { 00:10:55.936 "name": "BaseBdev3", 00:10:55.936 "uuid": "ab4595dc-8c99-4154-8351-56fd9adb62c8", 00:10:55.936 "is_configured": true, 00:10:55.936 "data_offset": 2048, 00:10:55.936 "data_size": 63488 00:10:55.936 } 00:10:55.936 ] 00:10:55.936 } 00:10:55.936 } 00:10:55.936 }' 00:10:55.936 04:01:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:55.936 04:01:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:10:55.936 BaseBdev2 00:10:55.936 BaseBdev3' 00:10:55.936 04:01:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:55.936 04:01:49 bdev_raid.raid_state_function_test_sb 
-- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:55.936 04:01:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:55.936 04:01:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:10:55.936 04:01:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:55.936 04:01:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:55.936 04:01:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:55.936 04:01:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:55.936 04:01:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:55.936 04:01:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:55.936 04:01:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:55.936 04:01:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:55.936 04:01:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:10:55.936 04:01:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:55.936 04:01:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:55.936 04:01:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:56.196 04:01:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:56.196 04:01:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:56.196 
04:01:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:56.196 04:01:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:56.196 04:01:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:10:56.196 04:01:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:56.196 04:01:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:56.196 04:01:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:56.196 04:01:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:56.196 04:01:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:56.196 04:01:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:56.196 04:01:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:56.196 04:01:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:56.196 [2024-12-06 04:01:49.328219] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:56.196 [2024-12-06 04:01:49.328313] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:56.196 [2024-12-06 04:01:49.328433] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:56.196 [2024-12-06 04:01:49.328524] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:56.196 [2024-12-06 04:01:49.328576] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:10:56.196 04:01:49 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:56.196 04:01:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 66322 00:10:56.196 04:01:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 66322 ']' 00:10:56.196 04:01:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 66322 00:10:56.196 04:01:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:10:56.196 04:01:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:56.196 04:01:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 66322 00:10:56.197 04:01:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:56.197 04:01:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:56.197 killing process with pid 66322 00:10:56.197 04:01:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 66322' 00:10:56.197 04:01:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 66322 00:10:56.197 [2024-12-06 04:01:49.376387] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:56.197 04:01:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 66322 00:10:56.457 [2024-12-06 04:01:49.681064] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:57.849 04:01:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:10:57.849 00:10:57.849 real 0m10.676s 00:10:57.849 user 0m17.013s 00:10:57.849 sys 0m1.840s 00:10:57.849 04:01:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:57.849 ************************************ 00:10:57.849 
END TEST raid_state_function_test_sb 00:10:57.849 ************************************ 00:10:57.849 04:01:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:57.849 04:01:50 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test concat 3 00:10:57.849 04:01:50 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:10:57.849 04:01:50 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:57.849 04:01:50 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:57.849 ************************************ 00:10:57.849 START TEST raid_superblock_test 00:10:57.849 ************************************ 00:10:57.849 04:01:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test concat 3 00:10:57.849 04:01:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=concat 00:10:57.849 04:01:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=3 00:10:57.849 04:01:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:10:57.849 04:01:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:10:57.849 04:01:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:10:57.849 04:01:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:10:57.849 04:01:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:10:57.849 04:01:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:10:57.849 04:01:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:10:57.849 04:01:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:10:57.849 04:01:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 
00:10:57.849 04:01:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:10:57.849 04:01:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:10:57.849 04:01:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' concat '!=' raid1 ']' 00:10:57.849 04:01:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:10:57.849 04:01:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:10:57.849 04:01:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:10:57.849 04:01:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=66944 00:10:57.849 04:01:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 66944 00:10:57.849 04:01:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 66944 ']' 00:10:57.850 04:01:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:57.850 04:01:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:57.850 04:01:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:57.850 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:57.850 04:01:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:57.850 04:01:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:57.850 [2024-12-06 04:01:50.966892] Starting SPDK v25.01-pre git sha1 a4d2a837b / DPDK 24.03.0 initialization... 
00:10:57.850 [2024-12-06 04:01:50.967160] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66944 ] 00:10:57.850 [2024-12-06 04:01:51.151145] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:58.108 [2024-12-06 04:01:51.266246] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:58.368 [2024-12-06 04:01:51.472682] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:58.368 [2024-12-06 04:01:51.472822] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:58.628 04:01:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:58.628 04:01:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:10:58.628 04:01:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:10:58.628 04:01:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:10:58.628 04:01:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:10:58.628 04:01:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:10:58.628 04:01:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:10:58.628 04:01:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:10:58.628 04:01:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:10:58.628 04:01:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:10:58.628 04:01:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:10:58.628 
04:01:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:58.628 04:01:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:58.628 malloc1 00:10:58.628 04:01:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:58.628 04:01:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:10:58.628 04:01:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:58.628 04:01:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:58.628 [2024-12-06 04:01:51.865127] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:10:58.628 [2024-12-06 04:01:51.865242] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:58.628 [2024-12-06 04:01:51.865286] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:10:58.628 [2024-12-06 04:01:51.865373] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:58.628 [2024-12-06 04:01:51.867460] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:58.628 [2024-12-06 04:01:51.867530] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:10:58.628 pt1 00:10:58.628 04:01:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:58.628 04:01:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:10:58.628 04:01:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:10:58.628 04:01:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:10:58.628 04:01:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:10:58.628 04:01:51 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:10:58.628 04:01:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:10:58.628 04:01:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:10:58.628 04:01:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:10:58.628 04:01:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:10:58.628 04:01:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:58.628 04:01:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:58.628 malloc2 00:10:58.628 04:01:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:58.628 04:01:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:10:58.628 04:01:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:58.628 04:01:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:58.628 [2024-12-06 04:01:51.925239] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:10:58.628 [2024-12-06 04:01:51.925338] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:58.628 [2024-12-06 04:01:51.925382] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:10:58.628 [2024-12-06 04:01:51.925415] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:58.628 [2024-12-06 04:01:51.927441] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:58.628 [2024-12-06 04:01:51.927505] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:10:58.628 
pt2 00:10:58.628 04:01:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:58.628 04:01:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:10:58.628 04:01:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:10:58.628 04:01:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:10:58.628 04:01:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:10:58.628 04:01:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:10:58.628 04:01:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:10:58.628 04:01:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:10:58.628 04:01:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:10:58.628 04:01:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:10:58.628 04:01:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:58.628 04:01:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:58.888 malloc3 00:10:58.888 04:01:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:58.888 04:01:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:10:58.888 04:01:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:58.888 04:01:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:58.888 [2024-12-06 04:01:51.997467] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:10:58.888 [2024-12-06 04:01:51.997563] 
vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:58.888 [2024-12-06 04:01:51.997601] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:10:58.888 [2024-12-06 04:01:51.997636] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:58.888 [2024-12-06 04:01:51.999828] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:58.888 pt3 00:10:58.888 [2024-12-06 04:01:51.999899] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:10:58.888 04:01:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:58.888 04:01:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:10:58.888 04:01:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:10:58.888 04:01:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''pt1 pt2 pt3'\''' -n raid_bdev1 -s 00:10:58.888 04:01:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:58.888 04:01:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:58.888 [2024-12-06 04:01:52.009516] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:10:58.888 [2024-12-06 04:01:52.011436] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:10:58.888 [2024-12-06 04:01:52.011541] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:10:58.888 [2024-12-06 04:01:52.011720] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:10:58.888 [2024-12-06 04:01:52.011769] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:10:58.888 [2024-12-06 04:01:52.012022] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 
00:10:58.888 [2024-12-06 04:01:52.012231] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:10:58.888 [2024-12-06 04:01:52.012298] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:10:58.888 [2024-12-06 04:01:52.012520] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:58.888 04:01:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:58.888 04:01:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:10:58.888 04:01:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:58.888 04:01:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:58.888 04:01:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:58.888 04:01:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:58.888 04:01:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:58.888 04:01:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:58.888 04:01:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:58.888 04:01:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:58.888 04:01:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:58.888 04:01:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:58.888 04:01:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:58.888 04:01:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:58.888 04:01:52 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:58.888 04:01:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:58.888 04:01:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:58.888 "name": "raid_bdev1", 00:10:58.888 "uuid": "b8fc94db-b093-4a1b-a1ac-9f2ca63220d9", 00:10:58.888 "strip_size_kb": 64, 00:10:58.888 "state": "online", 00:10:58.888 "raid_level": "concat", 00:10:58.888 "superblock": true, 00:10:58.888 "num_base_bdevs": 3, 00:10:58.888 "num_base_bdevs_discovered": 3, 00:10:58.888 "num_base_bdevs_operational": 3, 00:10:58.888 "base_bdevs_list": [ 00:10:58.888 { 00:10:58.888 "name": "pt1", 00:10:58.888 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:58.888 "is_configured": true, 00:10:58.888 "data_offset": 2048, 00:10:58.888 "data_size": 63488 00:10:58.888 }, 00:10:58.888 { 00:10:58.888 "name": "pt2", 00:10:58.889 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:58.889 "is_configured": true, 00:10:58.889 "data_offset": 2048, 00:10:58.889 "data_size": 63488 00:10:58.889 }, 00:10:58.889 { 00:10:58.889 "name": "pt3", 00:10:58.889 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:58.889 "is_configured": true, 00:10:58.889 "data_offset": 2048, 00:10:58.889 "data_size": 63488 00:10:58.889 } 00:10:58.889 ] 00:10:58.889 }' 00:10:58.889 04:01:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:58.889 04:01:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:59.148 04:01:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:10:59.148 04:01:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:10:59.148 04:01:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:59.148 04:01:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local 
base_bdev_names 00:10:59.148 04:01:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:10:59.148 04:01:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:59.148 04:01:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:10:59.148 04:01:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:59.148 04:01:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:59.148 04:01:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:59.148 [2024-12-06 04:01:52.469056] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:59.148 04:01:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:59.408 04:01:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:59.408 "name": "raid_bdev1", 00:10:59.408 "aliases": [ 00:10:59.408 "b8fc94db-b093-4a1b-a1ac-9f2ca63220d9" 00:10:59.408 ], 00:10:59.408 "product_name": "Raid Volume", 00:10:59.408 "block_size": 512, 00:10:59.408 "num_blocks": 190464, 00:10:59.408 "uuid": "b8fc94db-b093-4a1b-a1ac-9f2ca63220d9", 00:10:59.408 "assigned_rate_limits": { 00:10:59.408 "rw_ios_per_sec": 0, 00:10:59.408 "rw_mbytes_per_sec": 0, 00:10:59.408 "r_mbytes_per_sec": 0, 00:10:59.408 "w_mbytes_per_sec": 0 00:10:59.408 }, 00:10:59.408 "claimed": false, 00:10:59.408 "zoned": false, 00:10:59.408 "supported_io_types": { 00:10:59.408 "read": true, 00:10:59.408 "write": true, 00:10:59.408 "unmap": true, 00:10:59.408 "flush": true, 00:10:59.408 "reset": true, 00:10:59.408 "nvme_admin": false, 00:10:59.408 "nvme_io": false, 00:10:59.408 "nvme_io_md": false, 00:10:59.408 "write_zeroes": true, 00:10:59.408 "zcopy": false, 00:10:59.408 "get_zone_info": false, 00:10:59.408 "zone_management": false, 00:10:59.408 "zone_append": false, 00:10:59.408 "compare": 
false, 00:10:59.408 "compare_and_write": false, 00:10:59.408 "abort": false, 00:10:59.408 "seek_hole": false, 00:10:59.408 "seek_data": false, 00:10:59.408 "copy": false, 00:10:59.408 "nvme_iov_md": false 00:10:59.408 }, 00:10:59.408 "memory_domains": [ 00:10:59.408 { 00:10:59.408 "dma_device_id": "system", 00:10:59.408 "dma_device_type": 1 00:10:59.408 }, 00:10:59.408 { 00:10:59.408 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:59.408 "dma_device_type": 2 00:10:59.408 }, 00:10:59.408 { 00:10:59.408 "dma_device_id": "system", 00:10:59.408 "dma_device_type": 1 00:10:59.408 }, 00:10:59.408 { 00:10:59.408 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:59.408 "dma_device_type": 2 00:10:59.408 }, 00:10:59.408 { 00:10:59.408 "dma_device_id": "system", 00:10:59.408 "dma_device_type": 1 00:10:59.408 }, 00:10:59.408 { 00:10:59.408 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:59.408 "dma_device_type": 2 00:10:59.408 } 00:10:59.408 ], 00:10:59.408 "driver_specific": { 00:10:59.408 "raid": { 00:10:59.408 "uuid": "b8fc94db-b093-4a1b-a1ac-9f2ca63220d9", 00:10:59.408 "strip_size_kb": 64, 00:10:59.408 "state": "online", 00:10:59.408 "raid_level": "concat", 00:10:59.408 "superblock": true, 00:10:59.408 "num_base_bdevs": 3, 00:10:59.409 "num_base_bdevs_discovered": 3, 00:10:59.409 "num_base_bdevs_operational": 3, 00:10:59.409 "base_bdevs_list": [ 00:10:59.409 { 00:10:59.409 "name": "pt1", 00:10:59.409 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:59.409 "is_configured": true, 00:10:59.409 "data_offset": 2048, 00:10:59.409 "data_size": 63488 00:10:59.409 }, 00:10:59.409 { 00:10:59.409 "name": "pt2", 00:10:59.409 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:59.409 "is_configured": true, 00:10:59.409 "data_offset": 2048, 00:10:59.409 "data_size": 63488 00:10:59.409 }, 00:10:59.409 { 00:10:59.409 "name": "pt3", 00:10:59.409 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:59.409 "is_configured": true, 00:10:59.409 "data_offset": 2048, 00:10:59.409 
"data_size": 63488 00:10:59.409 } 00:10:59.409 ] 00:10:59.409 } 00:10:59.409 } 00:10:59.409 }' 00:10:59.409 04:01:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:59.409 04:01:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:10:59.409 pt2 00:10:59.409 pt3' 00:10:59.409 04:01:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:59.409 04:01:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:59.409 04:01:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:59.409 04:01:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:10:59.409 04:01:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:59.409 04:01:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:59.409 04:01:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:59.409 04:01:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:59.409 04:01:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:59.409 04:01:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:59.409 04:01:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:59.409 04:01:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:10:59.409 04:01:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:59.409 04:01:52 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:59.409 04:01:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:59.409 04:01:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:59.409 04:01:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:59.409 04:01:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:59.409 04:01:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:59.409 04:01:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:10:59.409 04:01:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:59.409 04:01:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:59.409 04:01:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:59.409 04:01:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:59.409 04:01:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:59.409 04:01:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:59.409 04:01:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:10:59.409 04:01:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:59.409 04:01:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:59.409 04:01:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:10:59.409 [2024-12-06 04:01:52.748480] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:59.669 04:01:52 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:59.669 04:01:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=b8fc94db-b093-4a1b-a1ac-9f2ca63220d9 00:10:59.669 04:01:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z b8fc94db-b093-4a1b-a1ac-9f2ca63220d9 ']' 00:10:59.669 04:01:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:10:59.669 04:01:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:59.669 04:01:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:59.669 [2024-12-06 04:01:52.776148] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:59.669 [2024-12-06 04:01:52.776214] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:59.669 [2024-12-06 04:01:52.776337] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:59.669 [2024-12-06 04:01:52.776471] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:59.669 [2024-12-06 04:01:52.776518] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:10:59.669 04:01:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:59.669 04:01:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:10:59.669 04:01:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:59.669 04:01:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:59.669 04:01:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:59.669 04:01:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:59.669 04:01:52 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:10:59.669 04:01:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:10:59.669 04:01:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:10:59.669 04:01:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:10:59.669 04:01:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:59.669 04:01:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:59.669 04:01:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:59.669 04:01:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:10:59.669 04:01:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:10:59.669 04:01:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:59.669 04:01:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:59.669 04:01:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:59.669 04:01:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:10:59.669 04:01:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:10:59.669 04:01:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:59.669 04:01:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:59.669 04:01:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:59.669 04:01:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:10:59.669 04:01:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 
00:10:59.669 04:01:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:10:59.669 04:01:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:59.669 04:01:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:59.669 04:01:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:10:59.669 04:01:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:10:59.669 04:01:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:10:59.669 04:01:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:10:59.669 04:01:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:10:59.669 04:01:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:59.669 04:01:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:10:59.669 04:01:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:59.669 04:01:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:10:59.669 04:01:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:59.669 04:01:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:59.669 [2024-12-06 04:01:52.915963] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:10:59.669 [2024-12-06 04:01:52.917951] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev 
malloc2 is claimed 00:10:59.669 [2024-12-06 04:01:52.918060] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:10:59.669 [2024-12-06 04:01:52.918131] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:10:59.669 [2024-12-06 04:01:52.918237] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:10:59.669 [2024-12-06 04:01:52.918316] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:10:59.669 [2024-12-06 04:01:52.918372] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:59.669 [2024-12-06 04:01:52.918409] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:10:59.669 request: 00:10:59.669 { 00:10:59.669 "name": "raid_bdev1", 00:10:59.669 "raid_level": "concat", 00:10:59.669 "base_bdevs": [ 00:10:59.669 "malloc1", 00:10:59.669 "malloc2", 00:10:59.669 "malloc3" 00:10:59.669 ], 00:10:59.669 "strip_size_kb": 64, 00:10:59.669 "superblock": false, 00:10:59.669 "method": "bdev_raid_create", 00:10:59.669 "req_id": 1 00:10:59.669 } 00:10:59.669 Got JSON-RPC error response 00:10:59.669 response: 00:10:59.669 { 00:10:59.669 "code": -17, 00:10:59.669 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:10:59.669 } 00:10:59.669 04:01:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:10:59.669 04:01:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:10:59.669 04:01:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:10:59.669 04:01:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:10:59.669 04:01:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@679 -- # (( !es 
== 0 )) 00:10:59.669 04:01:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:10:59.669 04:01:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:59.669 04:01:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:59.669 04:01:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:59.669 04:01:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:59.669 04:01:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:10:59.669 04:01:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:10:59.669 04:01:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:10:59.669 04:01:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:59.669 04:01:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:59.669 [2024-12-06 04:01:52.967815] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:10:59.669 [2024-12-06 04:01:52.967859] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:59.669 [2024-12-06 04:01:52.967877] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:10:59.669 [2024-12-06 04:01:52.967886] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:59.669 [2024-12-06 04:01:52.970089] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:59.669 [2024-12-06 04:01:52.970125] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:10:59.669 [2024-12-06 04:01:52.970202] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:10:59.669 [2024-12-06 04:01:52.970261] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:10:59.669 pt1 00:10:59.669 04:01:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:59.669 04:01:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 3 00:10:59.669 04:01:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:59.669 04:01:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:59.669 04:01:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:59.669 04:01:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:59.669 04:01:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:59.669 04:01:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:59.670 04:01:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:59.670 04:01:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:59.670 04:01:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:59.670 04:01:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:59.670 04:01:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:59.670 04:01:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:59.670 04:01:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:59.670 04:01:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:59.929 04:01:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:59.929 "name": "raid_bdev1", 
00:10:59.929 "uuid": "b8fc94db-b093-4a1b-a1ac-9f2ca63220d9", 00:10:59.929 "strip_size_kb": 64, 00:10:59.929 "state": "configuring", 00:10:59.929 "raid_level": "concat", 00:10:59.929 "superblock": true, 00:10:59.929 "num_base_bdevs": 3, 00:10:59.929 "num_base_bdevs_discovered": 1, 00:10:59.929 "num_base_bdevs_operational": 3, 00:10:59.929 "base_bdevs_list": [ 00:10:59.929 { 00:10:59.929 "name": "pt1", 00:10:59.929 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:59.929 "is_configured": true, 00:10:59.929 "data_offset": 2048, 00:10:59.929 "data_size": 63488 00:10:59.929 }, 00:10:59.929 { 00:10:59.929 "name": null, 00:10:59.929 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:59.929 "is_configured": false, 00:10:59.929 "data_offset": 2048, 00:10:59.929 "data_size": 63488 00:10:59.929 }, 00:10:59.929 { 00:10:59.929 "name": null, 00:10:59.929 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:59.929 "is_configured": false, 00:10:59.929 "data_offset": 2048, 00:10:59.929 "data_size": 63488 00:10:59.929 } 00:10:59.929 ] 00:10:59.929 }' 00:10:59.929 04:01:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:59.929 04:01:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:00.188 04:01:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 3 -gt 2 ']' 00:11:00.188 04:01:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:11:00.188 04:01:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:00.188 04:01:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:00.188 [2024-12-06 04:01:53.379156] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:11:00.188 [2024-12-06 04:01:53.379281] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:00.188 [2024-12-06 04:01:53.379331] 
vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:11:00.188 [2024-12-06 04:01:53.379367] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:00.188 [2024-12-06 04:01:53.379842] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:00.188 [2024-12-06 04:01:53.379867] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:11:00.188 [2024-12-06 04:01:53.379956] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:11:00.188 [2024-12-06 04:01:53.379985] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:11:00.188 pt2 00:11:00.188 04:01:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:00.188 04:01:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:11:00.188 04:01:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:00.188 04:01:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:00.188 [2024-12-06 04:01:53.387144] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:11:00.188 04:01:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:00.188 04:01:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 3 00:11:00.188 04:01:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:00.188 04:01:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:00.188 04:01:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:00.188 04:01:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:00.188 04:01:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 
-- # local num_base_bdevs_operational=3 00:11:00.188 04:01:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:00.188 04:01:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:00.188 04:01:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:00.188 04:01:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:00.188 04:01:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:00.188 04:01:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:00.188 04:01:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:00.188 04:01:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:00.188 04:01:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:00.188 04:01:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:00.188 "name": "raid_bdev1", 00:11:00.188 "uuid": "b8fc94db-b093-4a1b-a1ac-9f2ca63220d9", 00:11:00.188 "strip_size_kb": 64, 00:11:00.188 "state": "configuring", 00:11:00.188 "raid_level": "concat", 00:11:00.188 "superblock": true, 00:11:00.188 "num_base_bdevs": 3, 00:11:00.188 "num_base_bdevs_discovered": 1, 00:11:00.188 "num_base_bdevs_operational": 3, 00:11:00.188 "base_bdevs_list": [ 00:11:00.188 { 00:11:00.188 "name": "pt1", 00:11:00.188 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:00.188 "is_configured": true, 00:11:00.188 "data_offset": 2048, 00:11:00.188 "data_size": 63488 00:11:00.188 }, 00:11:00.188 { 00:11:00.188 "name": null, 00:11:00.188 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:00.188 "is_configured": false, 00:11:00.188 "data_offset": 0, 00:11:00.188 "data_size": 63488 00:11:00.188 }, 00:11:00.188 { 00:11:00.188 "name": null, 00:11:00.188 
"uuid": "00000000-0000-0000-0000-000000000003", 00:11:00.188 "is_configured": false, 00:11:00.188 "data_offset": 2048, 00:11:00.188 "data_size": 63488 00:11:00.188 } 00:11:00.188 ] 00:11:00.188 }' 00:11:00.188 04:01:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:00.188 04:01:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:00.758 04:01:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:11:00.758 04:01:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:11:00.758 04:01:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:11:00.758 04:01:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:00.758 04:01:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:00.758 [2024-12-06 04:01:53.834345] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:11:00.758 [2024-12-06 04:01:53.834454] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:00.758 [2024-12-06 04:01:53.834489] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:11:00.758 [2024-12-06 04:01:53.834519] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:00.758 [2024-12-06 04:01:53.835030] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:00.758 [2024-12-06 04:01:53.835111] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:11:00.758 [2024-12-06 04:01:53.835230] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:11:00.758 [2024-12-06 04:01:53.835286] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:11:00.758 pt2 00:11:00.758 04:01:53 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:00.758 04:01:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:11:00.758 04:01:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:11:00.758 04:01:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:11:00.758 04:01:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:00.758 04:01:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:00.758 [2024-12-06 04:01:53.846307] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:11:00.758 [2024-12-06 04:01:53.846394] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:00.758 [2024-12-06 04:01:53.846424] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:11:00.758 [2024-12-06 04:01:53.846453] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:00.758 [2024-12-06 04:01:53.846894] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:00.758 [2024-12-06 04:01:53.846958] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:11:00.758 [2024-12-06 04:01:53.847061] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:11:00.758 [2024-12-06 04:01:53.847116] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:11:00.758 [2024-12-06 04:01:53.847273] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:11:00.758 [2024-12-06 04:01:53.847316] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:11:00.758 [2024-12-06 04:01:53.847602] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 
0x60d000005ee0 00:11:00.758 [2024-12-06 04:01:53.847790] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:11:00.758 [2024-12-06 04:01:53.847830] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:11:00.758 [2024-12-06 04:01:53.848011] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:00.758 pt3 00:11:00.758 04:01:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:00.758 04:01:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:11:00.758 04:01:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:11:00.758 04:01:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:11:00.758 04:01:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:00.758 04:01:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:00.758 04:01:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:00.758 04:01:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:00.758 04:01:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:00.758 04:01:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:00.758 04:01:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:00.758 04:01:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:00.758 04:01:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:00.758 04:01:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:00.758 04:01:53 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:00.758 04:01:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:00.758 04:01:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:00.758 04:01:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:00.758 04:01:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:00.758 "name": "raid_bdev1", 00:11:00.758 "uuid": "b8fc94db-b093-4a1b-a1ac-9f2ca63220d9", 00:11:00.758 "strip_size_kb": 64, 00:11:00.758 "state": "online", 00:11:00.758 "raid_level": "concat", 00:11:00.758 "superblock": true, 00:11:00.758 "num_base_bdevs": 3, 00:11:00.758 "num_base_bdevs_discovered": 3, 00:11:00.758 "num_base_bdevs_operational": 3, 00:11:00.758 "base_bdevs_list": [ 00:11:00.758 { 00:11:00.758 "name": "pt1", 00:11:00.758 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:00.758 "is_configured": true, 00:11:00.758 "data_offset": 2048, 00:11:00.759 "data_size": 63488 00:11:00.759 }, 00:11:00.759 { 00:11:00.759 "name": "pt2", 00:11:00.759 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:00.759 "is_configured": true, 00:11:00.759 "data_offset": 2048, 00:11:00.759 "data_size": 63488 00:11:00.759 }, 00:11:00.759 { 00:11:00.759 "name": "pt3", 00:11:00.759 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:00.759 "is_configured": true, 00:11:00.759 "data_offset": 2048, 00:11:00.759 "data_size": 63488 00:11:00.759 } 00:11:00.759 ] 00:11:00.759 }' 00:11:00.759 04:01:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:00.759 04:01:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:01.019 04:01:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:11:01.019 04:01:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local 
raid_bdev_name=raid_bdev1 00:11:01.019 04:01:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:11:01.019 04:01:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:11:01.019 04:01:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:11:01.019 04:01:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:11:01.019 04:01:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:11:01.019 04:01:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:11:01.019 04:01:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:01.019 04:01:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:01.019 [2024-12-06 04:01:54.309900] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:01.019 04:01:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:01.019 04:01:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:11:01.019 "name": "raid_bdev1", 00:11:01.019 "aliases": [ 00:11:01.019 "b8fc94db-b093-4a1b-a1ac-9f2ca63220d9" 00:11:01.019 ], 00:11:01.019 "product_name": "Raid Volume", 00:11:01.019 "block_size": 512, 00:11:01.019 "num_blocks": 190464, 00:11:01.019 "uuid": "b8fc94db-b093-4a1b-a1ac-9f2ca63220d9", 00:11:01.019 "assigned_rate_limits": { 00:11:01.019 "rw_ios_per_sec": 0, 00:11:01.019 "rw_mbytes_per_sec": 0, 00:11:01.019 "r_mbytes_per_sec": 0, 00:11:01.019 "w_mbytes_per_sec": 0 00:11:01.019 }, 00:11:01.019 "claimed": false, 00:11:01.019 "zoned": false, 00:11:01.019 "supported_io_types": { 00:11:01.019 "read": true, 00:11:01.019 "write": true, 00:11:01.019 "unmap": true, 00:11:01.019 "flush": true, 00:11:01.019 "reset": true, 00:11:01.019 "nvme_admin": false, 00:11:01.019 "nvme_io": false, 00:11:01.019 
"nvme_io_md": false, 00:11:01.019 "write_zeroes": true, 00:11:01.019 "zcopy": false, 00:11:01.019 "get_zone_info": false, 00:11:01.019 "zone_management": false, 00:11:01.019 "zone_append": false, 00:11:01.019 "compare": false, 00:11:01.019 "compare_and_write": false, 00:11:01.019 "abort": false, 00:11:01.019 "seek_hole": false, 00:11:01.019 "seek_data": false, 00:11:01.019 "copy": false, 00:11:01.019 "nvme_iov_md": false 00:11:01.019 }, 00:11:01.019 "memory_domains": [ 00:11:01.019 { 00:11:01.019 "dma_device_id": "system", 00:11:01.019 "dma_device_type": 1 00:11:01.019 }, 00:11:01.019 { 00:11:01.019 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:01.019 "dma_device_type": 2 00:11:01.019 }, 00:11:01.019 { 00:11:01.019 "dma_device_id": "system", 00:11:01.019 "dma_device_type": 1 00:11:01.019 }, 00:11:01.019 { 00:11:01.019 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:01.019 "dma_device_type": 2 00:11:01.019 }, 00:11:01.019 { 00:11:01.019 "dma_device_id": "system", 00:11:01.019 "dma_device_type": 1 00:11:01.019 }, 00:11:01.019 { 00:11:01.019 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:01.019 "dma_device_type": 2 00:11:01.019 } 00:11:01.019 ], 00:11:01.019 "driver_specific": { 00:11:01.019 "raid": { 00:11:01.019 "uuid": "b8fc94db-b093-4a1b-a1ac-9f2ca63220d9", 00:11:01.019 "strip_size_kb": 64, 00:11:01.019 "state": "online", 00:11:01.019 "raid_level": "concat", 00:11:01.019 "superblock": true, 00:11:01.019 "num_base_bdevs": 3, 00:11:01.019 "num_base_bdevs_discovered": 3, 00:11:01.019 "num_base_bdevs_operational": 3, 00:11:01.019 "base_bdevs_list": [ 00:11:01.019 { 00:11:01.019 "name": "pt1", 00:11:01.019 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:01.019 "is_configured": true, 00:11:01.019 "data_offset": 2048, 00:11:01.019 "data_size": 63488 00:11:01.019 }, 00:11:01.019 { 00:11:01.019 "name": "pt2", 00:11:01.019 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:01.019 "is_configured": true, 00:11:01.019 "data_offset": 2048, 00:11:01.019 "data_size": 
63488 00:11:01.019 }, 00:11:01.019 { 00:11:01.019 "name": "pt3", 00:11:01.019 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:01.020 "is_configured": true, 00:11:01.020 "data_offset": 2048, 00:11:01.020 "data_size": 63488 00:11:01.020 } 00:11:01.020 ] 00:11:01.020 } 00:11:01.020 } 00:11:01.020 }' 00:11:01.020 04:01:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:01.020 04:01:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:11:01.020 pt2 00:11:01.020 pt3' 00:11:01.020 04:01:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:01.279 04:01:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:11:01.279 04:01:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:01.279 04:01:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:01.279 04:01:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:11:01.279 04:01:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:01.279 04:01:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:01.279 04:01:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:01.279 04:01:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:01.279 04:01:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:01.279 04:01:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:01.279 04:01:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 
00:11:01.279 04:01:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:01.279 04:01:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:01.279 04:01:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:01.279 04:01:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:01.279 04:01:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:01.279 04:01:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:01.279 04:01:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:01.279 04:01:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:11:01.279 04:01:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:01.279 04:01:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:01.279 04:01:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:01.279 04:01:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:01.279 04:01:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:01.279 04:01:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:01.279 04:01:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:11:01.279 04:01:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:11:01.279 04:01:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:01.279 04:01:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- 
# set +x 00:11:01.279 [2024-12-06 04:01:54.581393] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:01.279 04:01:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:01.279 04:01:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' b8fc94db-b093-4a1b-a1ac-9f2ca63220d9 '!=' b8fc94db-b093-4a1b-a1ac-9f2ca63220d9 ']' 00:11:01.279 04:01:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy concat 00:11:01.279 04:01:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:11:01.279 04:01:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1 00:11:01.279 04:01:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 66944 00:11:01.279 04:01:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 66944 ']' 00:11:01.279 04:01:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # kill -0 66944 00:11:01.279 04:01:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # uname 00:11:01.279 04:01:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:01.279 04:01:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 66944 00:11:01.539 killing process with pid 66944 00:11:01.539 04:01:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:01.539 04:01:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:01.539 04:01:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 66944' 00:11:01.539 04:01:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@973 -- # kill 66944 00:11:01.539 [2024-12-06 04:01:54.646665] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:01.539 [2024-12-06 04:01:54.646757] 
bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:01.539 [2024-12-06 04:01:54.646821] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:01.539 [2024-12-06 04:01:54.646834] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:11:01.539 04:01:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@978 -- # wait 66944 00:11:01.799 [2024-12-06 04:01:54.963068] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:03.177 ************************************ 00:11:03.177 END TEST raid_superblock_test 00:11:03.177 ************************************ 00:11:03.177 04:01:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:11:03.177 00:11:03.177 real 0m5.246s 00:11:03.177 user 0m7.490s 00:11:03.177 sys 0m0.880s 00:11:03.177 04:01:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:03.177 04:01:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:03.177 04:01:56 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test concat 3 read 00:11:03.177 04:01:56 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:11:03.177 04:01:56 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:03.177 04:01:56 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:03.177 ************************************ 00:11:03.177 START TEST raid_read_error_test 00:11:03.177 ************************************ 00:11:03.177 04:01:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test concat 3 read 00:11:03.177 04:01:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat 00:11:03.177 04:01:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=3 00:11:03.177 04:01:56 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:11:03.177 04:01:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:11:03.177 04:01:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:03.177 04:01:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:11:03.177 04:01:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:03.177 04:01:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:03.177 04:01:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:11:03.177 04:01:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:03.177 04:01:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:03.177 04:01:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:11:03.177 04:01:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:03.177 04:01:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:03.177 04:01:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:11:03.177 04:01:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:11:03.177 04:01:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:11:03.177 04:01:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:11:03.177 04:01:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:11:03.177 04:01:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:11:03.177 04:01:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:11:03.177 04:01:56 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']' 00:11:03.177 04:01:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:11:03.177 04:01:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:11:03.177 04:01:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:11:03.177 04:01:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.p9P6htf9fU 00:11:03.177 04:01:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=67197 00:11:03.177 04:01:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:11:03.177 04:01:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 67197 00:11:03.177 04:01:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # '[' -z 67197 ']' 00:11:03.177 04:01:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:03.177 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:03.177 04:01:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:03.177 04:01:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:03.177 04:01:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:03.177 04:01:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:03.177 [2024-12-06 04:01:56.296854] Starting SPDK v25.01-pre git sha1 a4d2a837b / DPDK 24.03.0 initialization... 
00:11:03.177 [2024-12-06 04:01:56.296973] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67197 ] 00:11:03.177 [2024-12-06 04:01:56.468747] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:03.437 [2024-12-06 04:01:56.591474] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:03.698 [2024-12-06 04:01:56.797878] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:03.698 [2024-12-06 04:01:56.797940] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:03.958 04:01:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:03.958 04:01:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@868 -- # return 0 00:11:03.958 04:01:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:03.958 04:01:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:11:03.958 04:01:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:03.958 04:01:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:03.958 BaseBdev1_malloc 00:11:03.958 04:01:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:03.958 04:01:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:11:03.958 04:01:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:03.958 04:01:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:03.958 true 00:11:03.958 04:01:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:11:03.958 04:01:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:11:03.958 04:01:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:03.958 04:01:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:03.958 [2024-12-06 04:01:57.216366] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:11:03.958 [2024-12-06 04:01:57.216500] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:03.958 [2024-12-06 04:01:57.216542] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:11:03.958 [2024-12-06 04:01:57.216578] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:03.958 [2024-12-06 04:01:57.219010] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:03.958 [2024-12-06 04:01:57.219102] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:11:03.958 BaseBdev1 00:11:03.958 04:01:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:03.958 04:01:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:03.958 04:01:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:11:03.958 04:01:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:03.958 04:01:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:03.958 BaseBdev2_malloc 00:11:03.958 04:01:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:03.958 04:01:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:11:03.958 04:01:57 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:11:03.958 04:01:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:03.958 true 00:11:03.958 04:01:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:03.958 04:01:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:11:03.958 04:01:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:03.958 04:01:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:03.958 [2024-12-06 04:01:57.284353] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:11:03.958 [2024-12-06 04:01:57.284524] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:03.958 [2024-12-06 04:01:57.284565] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:11:03.958 [2024-12-06 04:01:57.284599] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:03.958 [2024-12-06 04:01:57.287040] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:03.958 [2024-12-06 04:01:57.287144] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:11:03.958 BaseBdev2 00:11:03.958 04:01:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:03.958 04:01:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:03.958 04:01:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:11:03.958 04:01:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:03.958 04:01:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:04.219 BaseBdev3_malloc 00:11:04.219 04:01:57 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:04.219 04:01:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:11:04.219 04:01:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:04.219 04:01:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:04.219 true 00:11:04.219 04:01:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:04.219 04:01:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:11:04.219 04:01:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:04.219 04:01:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:04.219 [2024-12-06 04:01:57.359755] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:11:04.219 [2024-12-06 04:01:57.359914] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:04.219 [2024-12-06 04:01:57.359955] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:11:04.219 [2024-12-06 04:01:57.359986] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:04.219 [2024-12-06 04:01:57.362280] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:04.219 [2024-12-06 04:01:57.362386] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:11:04.219 BaseBdev3 00:11:04.219 04:01:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:04.219 04:01:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s 00:11:04.219 04:01:57 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:11:04.219 04:01:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:04.220 [2024-12-06 04:01:57.371837] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:04.220 [2024-12-06 04:01:57.373836] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:04.220 [2024-12-06 04:01:57.373973] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:04.220 [2024-12-06 04:01:57.374254] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:11:04.220 [2024-12-06 04:01:57.374305] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:11:04.220 [2024-12-06 04:01:57.374618] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006560 00:11:04.220 [2024-12-06 04:01:57.374835] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:11:04.220 [2024-12-06 04:01:57.374880] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:11:04.220 [2024-12-06 04:01:57.375149] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:04.220 04:01:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:04.220 04:01:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:11:04.220 04:01:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:04.220 04:01:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:04.220 04:01:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:04.220 04:01:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:04.220 04:01:57 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:04.220 04:01:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:04.220 04:01:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:04.220 04:01:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:04.220 04:01:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:04.220 04:01:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:04.220 04:01:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:04.220 04:01:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:04.220 04:01:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:04.220 04:01:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:04.220 04:01:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:04.220 "name": "raid_bdev1", 00:11:04.220 "uuid": "a64bd707-84e9-4b40-826b-d0b8220d45fd", 00:11:04.220 "strip_size_kb": 64, 00:11:04.220 "state": "online", 00:11:04.220 "raid_level": "concat", 00:11:04.220 "superblock": true, 00:11:04.220 "num_base_bdevs": 3, 00:11:04.220 "num_base_bdevs_discovered": 3, 00:11:04.220 "num_base_bdevs_operational": 3, 00:11:04.220 "base_bdevs_list": [ 00:11:04.220 { 00:11:04.220 "name": "BaseBdev1", 00:11:04.220 "uuid": "0440e402-8b20-5869-9c98-073ad9ae56cc", 00:11:04.220 "is_configured": true, 00:11:04.220 "data_offset": 2048, 00:11:04.220 "data_size": 63488 00:11:04.220 }, 00:11:04.220 { 00:11:04.220 "name": "BaseBdev2", 00:11:04.220 "uuid": "fd603918-d929-5a0b-94a2-4aae3b0f2235", 00:11:04.220 "is_configured": true, 00:11:04.220 "data_offset": 2048, 00:11:04.220 "data_size": 63488 
00:11:04.220 }, 00:11:04.220 { 00:11:04.220 "name": "BaseBdev3", 00:11:04.220 "uuid": "271dc717-c848-5f95-8d55-7ac41e2abacc", 00:11:04.220 "is_configured": true, 00:11:04.220 "data_offset": 2048, 00:11:04.220 "data_size": 63488 00:11:04.220 } 00:11:04.220 ] 00:11:04.220 }' 00:11:04.220 04:01:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:04.220 04:01:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:04.790 04:01:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:11:04.790 04:01:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:11:04.790 [2024-12-06 04:01:57.975951] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006700 00:11:05.746 04:01:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:11:05.746 04:01:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:05.746 04:01:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:05.746 04:01:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:05.746 04:01:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:11:05.746 04:01:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]] 00:11:05.746 04:01:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=3 00:11:05.746 04:01:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:11:05.746 04:01:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:05.746 04:01:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 
00:11:05.746 04:01:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:05.746 04:01:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:05.746 04:01:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:05.746 04:01:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:05.746 04:01:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:05.746 04:01:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:05.746 04:01:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:05.746 04:01:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:05.746 04:01:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:05.746 04:01:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:05.746 04:01:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:05.746 04:01:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:05.746 04:01:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:05.746 "name": "raid_bdev1", 00:11:05.746 "uuid": "a64bd707-84e9-4b40-826b-d0b8220d45fd", 00:11:05.746 "strip_size_kb": 64, 00:11:05.746 "state": "online", 00:11:05.746 "raid_level": "concat", 00:11:05.746 "superblock": true, 00:11:05.746 "num_base_bdevs": 3, 00:11:05.746 "num_base_bdevs_discovered": 3, 00:11:05.746 "num_base_bdevs_operational": 3, 00:11:05.746 "base_bdevs_list": [ 00:11:05.746 { 00:11:05.746 "name": "BaseBdev1", 00:11:05.746 "uuid": "0440e402-8b20-5869-9c98-073ad9ae56cc", 00:11:05.746 "is_configured": true, 00:11:05.746 "data_offset": 2048, 00:11:05.746 "data_size": 63488 
00:11:05.746 }, 00:11:05.746 { 00:11:05.746 "name": "BaseBdev2", 00:11:05.746 "uuid": "fd603918-d929-5a0b-94a2-4aae3b0f2235", 00:11:05.746 "is_configured": true, 00:11:05.746 "data_offset": 2048, 00:11:05.746 "data_size": 63488 00:11:05.746 }, 00:11:05.746 { 00:11:05.746 "name": "BaseBdev3", 00:11:05.746 "uuid": "271dc717-c848-5f95-8d55-7ac41e2abacc", 00:11:05.746 "is_configured": true, 00:11:05.746 "data_offset": 2048, 00:11:05.746 "data_size": 63488 00:11:05.746 } 00:11:05.746 ] 00:11:05.746 }' 00:11:05.746 04:01:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:05.746 04:01:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:06.006 04:01:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:11:06.006 04:01:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:06.006 04:01:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:06.006 [2024-12-06 04:01:59.356714] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:06.006 [2024-12-06 04:01:59.356810] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:06.265 [2024-12-06 04:01:59.359798] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:06.265 [2024-12-06 04:01:59.359917] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:06.265 [2024-12-06 04:01:59.359982] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:06.265 [2024-12-06 04:01:59.360031] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:11:06.265 04:01:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:06.265 04:01:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # 
killprocess 67197 00:11:06.265 { 00:11:06.265 "results": [ 00:11:06.265 { 00:11:06.265 "job": "raid_bdev1", 00:11:06.265 "core_mask": "0x1", 00:11:06.265 "workload": "randrw", 00:11:06.265 "percentage": 50, 00:11:06.265 "status": "finished", 00:11:06.265 "queue_depth": 1, 00:11:06.265 "io_size": 131072, 00:11:06.265 "runtime": 1.381609, 00:11:06.265 "iops": 15329.952251324361, 00:11:06.265 "mibps": 1916.2440314155451, 00:11:06.265 "io_failed": 1, 00:11:06.265 "io_timeout": 0, 00:11:06.265 "avg_latency_us": 90.30040394198558, 00:11:06.265 "min_latency_us": 27.053275109170304, 00:11:06.265 "max_latency_us": 1452.380786026201 00:11:06.265 } 00:11:06.265 ], 00:11:06.265 "core_count": 1 00:11:06.265 } 00:11:06.265 04:01:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # '[' -z 67197 ']' 00:11:06.265 04:01:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # kill -0 67197 00:11:06.265 04:01:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # uname 00:11:06.265 04:01:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:06.265 04:01:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 67197 00:11:06.265 killing process with pid 67197 00:11:06.265 04:01:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:06.265 04:01:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:06.265 04:01:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 67197' 00:11:06.265 04:01:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@973 -- # kill 67197 00:11:06.265 04:01:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@978 -- # wait 67197 00:11:06.265 [2024-12-06 04:01:59.397328] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:06.525 [2024-12-06 
04:01:59.636542] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:07.906 04:02:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.p9P6htf9fU 00:11:07.906 04:02:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:11:07.906 04:02:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:11:07.906 04:02:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.72 00:11:07.906 04:02:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy concat 00:11:07.906 ************************************ 00:11:07.906 END TEST raid_read_error_test 00:11:07.906 ************************************ 00:11:07.906 04:02:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:11:07.906 04:02:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:11:07.906 04:02:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.72 != \0\.\0\0 ]] 00:11:07.906 00:11:07.906 real 0m4.666s 00:11:07.906 user 0m5.638s 00:11:07.906 sys 0m0.543s 00:11:07.906 04:02:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:07.906 04:02:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:07.906 04:02:00 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test concat 3 write 00:11:07.906 04:02:00 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:11:07.906 04:02:00 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:07.906 04:02:00 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:07.906 ************************************ 00:11:07.906 START TEST raid_write_error_test 00:11:07.906 ************************************ 00:11:07.906 04:02:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test concat 3 write 00:11:07.906 04:02:00 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat 00:11:07.906 04:02:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=3 00:11:07.906 04:02:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:11:07.906 04:02:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:11:07.906 04:02:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:07.906 04:02:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:11:07.906 04:02:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:07.906 04:02:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:07.906 04:02:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:11:07.906 04:02:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:07.906 04:02:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:07.906 04:02:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:11:07.906 04:02:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:07.907 04:02:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:07.907 04:02:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:11:07.907 04:02:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:11:07.907 04:02:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:11:07.907 04:02:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:11:07.907 04:02:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:11:07.907 04:02:00 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:11:07.907 04:02:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:11:07.907 04:02:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']' 00:11:07.907 04:02:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:11:07.907 04:02:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:11:07.907 04:02:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:11:07.907 04:02:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.aYvG6GksnZ 00:11:07.907 04:02:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=67344 00:11:07.907 04:02:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:11:07.907 04:02:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 67344 00:11:07.907 04:02:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # '[' -z 67344 ']' 00:11:07.907 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:07.907 04:02:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:07.907 04:02:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:07.907 04:02:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:11:07.907 04:02:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:07.907 04:02:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:07.907 [2024-12-06 04:02:01.029069] Starting SPDK v25.01-pre git sha1 a4d2a837b / DPDK 24.03.0 initialization... 00:11:07.907 [2024-12-06 04:02:01.029181] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67344 ] 00:11:07.907 [2024-12-06 04:02:01.204872] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:08.167 [2024-12-06 04:02:01.320824] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:08.426 [2024-12-06 04:02:01.520176] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:08.426 [2024-12-06 04:02:01.520236] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:08.689 04:02:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:08.689 04:02:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@868 -- # return 0 00:11:08.689 04:02:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:08.689 04:02:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:11:08.689 04:02:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:08.689 04:02:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:08.689 BaseBdev1_malloc 00:11:08.689 04:02:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:08.689 04:02:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create 
BaseBdev1_malloc 00:11:08.689 04:02:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:08.689 04:02:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:08.689 true 00:11:08.689 04:02:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:08.689 04:02:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:11:08.689 04:02:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:08.689 04:02:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:08.689 [2024-12-06 04:02:01.919516] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:11:08.689 [2024-12-06 04:02:01.919570] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:08.689 [2024-12-06 04:02:01.919590] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:11:08.689 [2024-12-06 04:02:01.919601] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:08.689 [2024-12-06 04:02:01.921780] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:08.689 [2024-12-06 04:02:01.921907] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:11:08.689 BaseBdev1 00:11:08.689 04:02:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:08.689 04:02:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:08.689 04:02:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:11:08.689 04:02:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:08.689 04:02:01 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@10 -- # set +x 00:11:08.689 BaseBdev2_malloc 00:11:08.689 04:02:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:08.689 04:02:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:11:08.689 04:02:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:08.689 04:02:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:08.689 true 00:11:08.689 04:02:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:08.689 04:02:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:11:08.689 04:02:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:08.689 04:02:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:08.689 [2024-12-06 04:02:01.986529] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:11:08.689 [2024-12-06 04:02:01.986584] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:08.689 [2024-12-06 04:02:01.986601] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:11:08.689 [2024-12-06 04:02:01.986611] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:08.689 [2024-12-06 04:02:01.988799] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:08.689 [2024-12-06 04:02:01.988889] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:11:08.689 BaseBdev2 00:11:08.689 04:02:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:08.689 04:02:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:08.689 04:02:01 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:11:08.689 04:02:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:08.689 04:02:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:08.950 BaseBdev3_malloc 00:11:08.950 04:02:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:08.950 04:02:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:11:08.950 04:02:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:08.950 04:02:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:08.950 true 00:11:08.950 04:02:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:08.950 04:02:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:11:08.950 04:02:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:08.950 04:02:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:08.950 [2024-12-06 04:02:02.069762] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:11:08.950 [2024-12-06 04:02:02.069815] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:08.950 [2024-12-06 04:02:02.069832] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:11:08.950 [2024-12-06 04:02:02.069842] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:08.950 [2024-12-06 04:02:02.072213] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:08.950 [2024-12-06 04:02:02.072325] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: 
BaseBdev3 00:11:08.950 BaseBdev3 00:11:08.950 04:02:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:08.950 04:02:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s 00:11:08.950 04:02:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:08.950 04:02:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:08.950 [2024-12-06 04:02:02.081820] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:08.950 [2024-12-06 04:02:02.083752] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:08.951 [2024-12-06 04:02:02.083824] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:08.951 [2024-12-06 04:02:02.084022] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:11:08.951 [2024-12-06 04:02:02.084034] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:11:08.951 [2024-12-06 04:02:02.084326] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006560 00:11:08.951 [2024-12-06 04:02:02.084505] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:11:08.951 [2024-12-06 04:02:02.084527] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:11:08.951 [2024-12-06 04:02:02.084687] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:08.951 04:02:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:08.951 04:02:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:11:08.951 04:02:02 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:08.951 04:02:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:08.951 04:02:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:08.951 04:02:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:08.951 04:02:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:08.951 04:02:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:08.951 04:02:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:08.951 04:02:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:08.951 04:02:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:08.951 04:02:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:08.951 04:02:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:08.951 04:02:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:08.951 04:02:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:08.951 04:02:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:08.951 04:02:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:08.951 "name": "raid_bdev1", 00:11:08.951 "uuid": "e212e23e-6bbf-4a23-b8f9-8ea50ba98c1d", 00:11:08.951 "strip_size_kb": 64, 00:11:08.951 "state": "online", 00:11:08.951 "raid_level": "concat", 00:11:08.951 "superblock": true, 00:11:08.951 "num_base_bdevs": 3, 00:11:08.951 "num_base_bdevs_discovered": 3, 00:11:08.951 "num_base_bdevs_operational": 3, 00:11:08.951 "base_bdevs_list": [ 00:11:08.951 { 00:11:08.951 
"name": "BaseBdev1", 00:11:08.951 "uuid": "eb74c9a9-9bb5-588d-8aa6-51fdf228c8e0", 00:11:08.951 "is_configured": true, 00:11:08.951 "data_offset": 2048, 00:11:08.951 "data_size": 63488 00:11:08.951 }, 00:11:08.951 { 00:11:08.951 "name": "BaseBdev2", 00:11:08.951 "uuid": "d418508e-fd70-5748-8a4a-5775360c9e4d", 00:11:08.951 "is_configured": true, 00:11:08.951 "data_offset": 2048, 00:11:08.951 "data_size": 63488 00:11:08.951 }, 00:11:08.951 { 00:11:08.951 "name": "BaseBdev3", 00:11:08.951 "uuid": "97de134b-0269-55c7-8453-f6712b348069", 00:11:08.951 "is_configured": true, 00:11:08.951 "data_offset": 2048, 00:11:08.951 "data_size": 63488 00:11:08.951 } 00:11:08.951 ] 00:11:08.951 }' 00:11:08.951 04:02:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:08.951 04:02:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:09.211 04:02:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:11:09.211 04:02:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:11:09.470 [2024-12-06 04:02:02.622292] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006700 00:11:10.410 04:02:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:11:10.410 04:02:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:10.410 04:02:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:10.410 04:02:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:10.410 04:02:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:11:10.410 04:02:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]] 00:11:10.410 04:02:03 bdev_raid.raid_write_error_test 
-- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=3 00:11:10.410 04:02:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:11:10.410 04:02:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:10.410 04:02:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:10.410 04:02:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:10.410 04:02:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:10.410 04:02:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:10.410 04:02:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:10.410 04:02:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:10.410 04:02:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:10.410 04:02:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:10.410 04:02:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:10.410 04:02:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:10.410 04:02:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:10.410 04:02:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:10.410 04:02:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:10.410 04:02:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:10.410 "name": "raid_bdev1", 00:11:10.410 "uuid": "e212e23e-6bbf-4a23-b8f9-8ea50ba98c1d", 00:11:10.410 "strip_size_kb": 64, 00:11:10.410 "state": "online", 
00:11:10.410 "raid_level": "concat", 00:11:10.410 "superblock": true, 00:11:10.410 "num_base_bdevs": 3, 00:11:10.410 "num_base_bdevs_discovered": 3, 00:11:10.410 "num_base_bdevs_operational": 3, 00:11:10.410 "base_bdevs_list": [ 00:11:10.410 { 00:11:10.410 "name": "BaseBdev1", 00:11:10.410 "uuid": "eb74c9a9-9bb5-588d-8aa6-51fdf228c8e0", 00:11:10.410 "is_configured": true, 00:11:10.410 "data_offset": 2048, 00:11:10.410 "data_size": 63488 00:11:10.410 }, 00:11:10.410 { 00:11:10.410 "name": "BaseBdev2", 00:11:10.410 "uuid": "d418508e-fd70-5748-8a4a-5775360c9e4d", 00:11:10.410 "is_configured": true, 00:11:10.410 "data_offset": 2048, 00:11:10.410 "data_size": 63488 00:11:10.410 }, 00:11:10.410 { 00:11:10.410 "name": "BaseBdev3", 00:11:10.410 "uuid": "97de134b-0269-55c7-8453-f6712b348069", 00:11:10.410 "is_configured": true, 00:11:10.410 "data_offset": 2048, 00:11:10.410 "data_size": 63488 00:11:10.410 } 00:11:10.410 ] 00:11:10.410 }' 00:11:10.410 04:02:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:10.410 04:02:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:10.669 04:02:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:11:10.669 04:02:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:10.669 04:02:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:10.669 [2024-12-06 04:02:04.018643] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:10.669 [2024-12-06 04:02:04.018722] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:10.669 [2024-12-06 04:02:04.021812] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:10.669 [2024-12-06 04:02:04.021897] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:10.669 [2024-12-06 04:02:04.021955] bdev_raid.c: 
469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:10.669 [2024-12-06 04:02:04.021998] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:11:10.928 04:02:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:10.928 { 00:11:10.928 "results": [ 00:11:10.928 { 00:11:10.928 "job": "raid_bdev1", 00:11:10.928 "core_mask": "0x1", 00:11:10.928 "workload": "randrw", 00:11:10.928 "percentage": 50, 00:11:10.928 "status": "finished", 00:11:10.928 "queue_depth": 1, 00:11:10.928 "io_size": 131072, 00:11:10.928 "runtime": 1.397178, 00:11:10.928 "iops": 14314.568365662786, 00:11:10.928 "mibps": 1789.3210457078483, 00:11:10.928 "io_failed": 1, 00:11:10.928 "io_timeout": 0, 00:11:10.928 "avg_latency_us": 96.51711231032334, 00:11:10.928 "min_latency_us": 28.618340611353712, 00:11:10.928 "max_latency_us": 1602.6270742358079 00:11:10.928 } 00:11:10.928 ], 00:11:10.928 "core_count": 1 00:11:10.928 } 00:11:10.928 04:02:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 67344 00:11:10.928 04:02:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # '[' -z 67344 ']' 00:11:10.928 04:02:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # kill -0 67344 00:11:10.928 04:02:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # uname 00:11:10.928 04:02:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:10.928 04:02:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 67344 00:11:10.928 killing process with pid 67344 00:11:10.928 04:02:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:10.928 04:02:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:10.928 
04:02:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 67344' 00:11:10.928 04:02:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@973 -- # kill 67344 00:11:10.928 [2024-12-06 04:02:04.067137] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:10.928 04:02:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@978 -- # wait 67344 00:11:11.189 [2024-12-06 04:02:04.313983] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:12.566 04:02:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.aYvG6GksnZ 00:11:12.566 04:02:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:11:12.566 04:02:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:11:12.566 04:02:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.72 00:11:12.566 04:02:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy concat 00:11:12.566 04:02:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:11:12.566 04:02:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:11:12.566 04:02:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.72 != \0\.\0\0 ]] 00:11:12.566 00:11:12.566 real 0m4.650s 00:11:12.566 user 0m5.525s 00:11:12.566 sys 0m0.564s 00:11:12.566 ************************************ 00:11:12.566 END TEST raid_write_error_test 00:11:12.566 ************************************ 00:11:12.566 04:02:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:12.566 04:02:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:12.566 04:02:05 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:11:12.566 04:02:05 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test 
raid_state_function_test raid1 3 false 00:11:12.566 04:02:05 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:11:12.566 04:02:05 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:12.566 04:02:05 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:12.566 ************************************ 00:11:12.566 START TEST raid_state_function_test 00:11:12.566 ************************************ 00:11:12.566 04:02:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test raid1 3 false 00:11:12.566 04:02:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:11:12.566 04:02:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:11:12.566 04:02:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:11:12.566 04:02:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:11:12.566 04:02:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:11:12.566 04:02:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:12.566 04:02:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:11:12.566 04:02:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:12.566 04:02:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:12.566 04:02:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:11:12.566 04:02:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:12.566 04:02:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:12.566 04:02:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:11:12.566 04:02:05 bdev_raid.raid_state_function_test 
-- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:12.566 04:02:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:12.566 04:02:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:11:12.566 04:02:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:11:12.566 04:02:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:11:12.566 04:02:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:11:12.566 04:02:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:11:12.566 04:02:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:11:12.566 04:02:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:11:12.567 04:02:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:11:12.567 04:02:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:11:12.567 04:02:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:11:12.567 04:02:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=67486 00:11:12.567 04:02:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:11:12.567 04:02:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 67486' 00:11:12.567 Process raid pid: 67486 00:11:12.567 04:02:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 67486 00:11:12.567 04:02:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 67486 ']' 00:11:12.567 04:02:05 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:12.567 04:02:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:12.567 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:12.567 04:02:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:12.567 04:02:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:12.567 04:02:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:12.567 [2024-12-06 04:02:05.748092] Starting SPDK v25.01-pre git sha1 a4d2a837b / DPDK 24.03.0 initialization... 00:11:12.567 [2024-12-06 04:02:05.748225] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:12.826 [2024-12-06 04:02:05.928908] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:12.826 [2024-12-06 04:02:06.044875] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:13.085 [2024-12-06 04:02:06.250680] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:13.085 [2024-12-06 04:02:06.250717] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:13.343 04:02:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:13.343 04:02:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:11:13.343 04:02:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:11:13.343 04:02:06 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:11:13.343 04:02:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:13.343 [2024-12-06 04:02:06.608859] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:13.343 [2024-12-06 04:02:06.609014] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:13.343 [2024-12-06 04:02:06.609030] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:13.343 [2024-12-06 04:02:06.609040] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:13.343 [2024-12-06 04:02:06.609057] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:11:13.343 [2024-12-06 04:02:06.609066] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:11:13.343 04:02:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:13.343 04:02:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:11:13.343 04:02:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:13.343 04:02:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:13.344 04:02:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:13.344 04:02:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:13.344 04:02:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:13.344 04:02:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:13.344 04:02:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:13.344 
04:02:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:13.344 04:02:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:13.344 04:02:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:13.344 04:02:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:13.344 04:02:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:13.344 04:02:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:13.344 04:02:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:13.344 04:02:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:13.344 "name": "Existed_Raid", 00:11:13.344 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:13.344 "strip_size_kb": 0, 00:11:13.344 "state": "configuring", 00:11:13.344 "raid_level": "raid1", 00:11:13.344 "superblock": false, 00:11:13.344 "num_base_bdevs": 3, 00:11:13.344 "num_base_bdevs_discovered": 0, 00:11:13.344 "num_base_bdevs_operational": 3, 00:11:13.344 "base_bdevs_list": [ 00:11:13.344 { 00:11:13.344 "name": "BaseBdev1", 00:11:13.344 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:13.344 "is_configured": false, 00:11:13.344 "data_offset": 0, 00:11:13.344 "data_size": 0 00:11:13.344 }, 00:11:13.344 { 00:11:13.344 "name": "BaseBdev2", 00:11:13.344 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:13.344 "is_configured": false, 00:11:13.344 "data_offset": 0, 00:11:13.344 "data_size": 0 00:11:13.344 }, 00:11:13.344 { 00:11:13.344 "name": "BaseBdev3", 00:11:13.344 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:13.344 "is_configured": false, 00:11:13.344 "data_offset": 0, 00:11:13.344 "data_size": 0 00:11:13.344 } 00:11:13.344 ] 00:11:13.344 }' 00:11:13.344 04:02:06 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:13.344 04:02:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:13.912 04:02:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:11:13.912 04:02:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:13.912 04:02:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:13.912 [2024-12-06 04:02:07.048090] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:13.912 [2024-12-06 04:02:07.048185] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:11:13.912 04:02:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:13.912 04:02:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:11:13.912 04:02:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:13.912 04:02:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:13.912 [2024-12-06 04:02:07.060039] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:13.912 [2024-12-06 04:02:07.060134] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:13.912 [2024-12-06 04:02:07.060180] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:13.912 [2024-12-06 04:02:07.060206] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:13.912 [2024-12-06 04:02:07.060227] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:11:13.912 [2024-12-06 04:02:07.060251] 
bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:11:13.912 04:02:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:13.912 04:02:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:11:13.912 04:02:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:13.912 04:02:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:13.912 [2024-12-06 04:02:07.107553] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:13.912 BaseBdev1 00:11:13.912 04:02:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:13.912 04:02:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:11:13.912 04:02:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:11:13.912 04:02:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:13.912 04:02:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:11:13.912 04:02:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:13.912 04:02:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:13.912 04:02:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:13.912 04:02:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:13.912 04:02:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:13.912 04:02:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:13.912 04:02:07 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:11:13.912 04:02:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:13.912 04:02:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:13.912 [ 00:11:13.912 { 00:11:13.912 "name": "BaseBdev1", 00:11:13.912 "aliases": [ 00:11:13.912 "63d2b3f5-50d1-4bd9-800c-508539d5d51d" 00:11:13.912 ], 00:11:13.912 "product_name": "Malloc disk", 00:11:13.912 "block_size": 512, 00:11:13.912 "num_blocks": 65536, 00:11:13.912 "uuid": "63d2b3f5-50d1-4bd9-800c-508539d5d51d", 00:11:13.912 "assigned_rate_limits": { 00:11:13.912 "rw_ios_per_sec": 0, 00:11:13.912 "rw_mbytes_per_sec": 0, 00:11:13.912 "r_mbytes_per_sec": 0, 00:11:13.912 "w_mbytes_per_sec": 0 00:11:13.912 }, 00:11:13.912 "claimed": true, 00:11:13.912 "claim_type": "exclusive_write", 00:11:13.912 "zoned": false, 00:11:13.912 "supported_io_types": { 00:11:13.912 "read": true, 00:11:13.912 "write": true, 00:11:13.912 "unmap": true, 00:11:13.912 "flush": true, 00:11:13.912 "reset": true, 00:11:13.912 "nvme_admin": false, 00:11:13.912 "nvme_io": false, 00:11:13.912 "nvme_io_md": false, 00:11:13.912 "write_zeroes": true, 00:11:13.912 "zcopy": true, 00:11:13.912 "get_zone_info": false, 00:11:13.912 "zone_management": false, 00:11:13.912 "zone_append": false, 00:11:13.912 "compare": false, 00:11:13.912 "compare_and_write": false, 00:11:13.912 "abort": true, 00:11:13.912 "seek_hole": false, 00:11:13.912 "seek_data": false, 00:11:13.912 "copy": true, 00:11:13.912 "nvme_iov_md": false 00:11:13.912 }, 00:11:13.912 "memory_domains": [ 00:11:13.912 { 00:11:13.912 "dma_device_id": "system", 00:11:13.912 "dma_device_type": 1 00:11:13.912 }, 00:11:13.912 { 00:11:13.912 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:13.912 "dma_device_type": 2 00:11:13.912 } 00:11:13.912 ], 00:11:13.912 "driver_specific": {} 00:11:13.912 } 00:11:13.912 ] 00:11:13.912 04:02:07 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:13.912 04:02:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:11:13.912 04:02:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:11:13.913 04:02:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:13.913 04:02:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:13.913 04:02:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:13.913 04:02:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:13.913 04:02:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:13.913 04:02:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:13.913 04:02:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:13.913 04:02:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:13.913 04:02:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:13.913 04:02:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:13.913 04:02:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:13.913 04:02:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:13.913 04:02:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:13.913 04:02:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:13.913 04:02:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 
-- # raid_bdev_info='{ 00:11:13.913 "name": "Existed_Raid", 00:11:13.913 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:13.913 "strip_size_kb": 0, 00:11:13.913 "state": "configuring", 00:11:13.913 "raid_level": "raid1", 00:11:13.913 "superblock": false, 00:11:13.913 "num_base_bdevs": 3, 00:11:13.913 "num_base_bdevs_discovered": 1, 00:11:13.913 "num_base_bdevs_operational": 3, 00:11:13.913 "base_bdevs_list": [ 00:11:13.913 { 00:11:13.913 "name": "BaseBdev1", 00:11:13.913 "uuid": "63d2b3f5-50d1-4bd9-800c-508539d5d51d", 00:11:13.913 "is_configured": true, 00:11:13.913 "data_offset": 0, 00:11:13.913 "data_size": 65536 00:11:13.913 }, 00:11:13.913 { 00:11:13.913 "name": "BaseBdev2", 00:11:13.913 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:13.913 "is_configured": false, 00:11:13.913 "data_offset": 0, 00:11:13.913 "data_size": 0 00:11:13.913 }, 00:11:13.913 { 00:11:13.913 "name": "BaseBdev3", 00:11:13.913 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:13.913 "is_configured": false, 00:11:13.913 "data_offset": 0, 00:11:13.913 "data_size": 0 00:11:13.913 } 00:11:13.913 ] 00:11:13.913 }' 00:11:13.913 04:02:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:13.913 04:02:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:14.480 04:02:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:11:14.480 04:02:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:14.480 04:02:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:14.480 [2024-12-06 04:02:07.618783] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:14.480 [2024-12-06 04:02:07.618842] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:11:14.480 04:02:07 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:14.480 04:02:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:11:14.480 04:02:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:14.480 04:02:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:14.480 [2024-12-06 04:02:07.630800] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:14.480 [2024-12-06 04:02:07.632804] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:14.480 [2024-12-06 04:02:07.632856] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:14.480 [2024-12-06 04:02:07.632867] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:11:14.480 [2024-12-06 04:02:07.632877] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:11:14.480 04:02:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:14.480 04:02:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:11:14.480 04:02:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:14.480 04:02:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:11:14.480 04:02:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:14.480 04:02:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:14.480 04:02:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:14.480 04:02:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local 
strip_size=0 00:11:14.480 04:02:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:14.480 04:02:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:14.480 04:02:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:14.480 04:02:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:14.480 04:02:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:14.480 04:02:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:14.480 04:02:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:14.480 04:02:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:14.480 04:02:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:14.480 04:02:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:14.480 04:02:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:14.480 "name": "Existed_Raid", 00:11:14.480 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:14.480 "strip_size_kb": 0, 00:11:14.480 "state": "configuring", 00:11:14.480 "raid_level": "raid1", 00:11:14.480 "superblock": false, 00:11:14.480 "num_base_bdevs": 3, 00:11:14.480 "num_base_bdevs_discovered": 1, 00:11:14.480 "num_base_bdevs_operational": 3, 00:11:14.480 "base_bdevs_list": [ 00:11:14.480 { 00:11:14.480 "name": "BaseBdev1", 00:11:14.480 "uuid": "63d2b3f5-50d1-4bd9-800c-508539d5d51d", 00:11:14.480 "is_configured": true, 00:11:14.480 "data_offset": 0, 00:11:14.480 "data_size": 65536 00:11:14.480 }, 00:11:14.480 { 00:11:14.480 "name": "BaseBdev2", 00:11:14.480 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:14.480 
"is_configured": false, 00:11:14.480 "data_offset": 0, 00:11:14.480 "data_size": 0 00:11:14.480 }, 00:11:14.480 { 00:11:14.480 "name": "BaseBdev3", 00:11:14.480 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:14.480 "is_configured": false, 00:11:14.480 "data_offset": 0, 00:11:14.480 "data_size": 0 00:11:14.480 } 00:11:14.480 ] 00:11:14.480 }' 00:11:14.480 04:02:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:14.480 04:02:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:14.869 04:02:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:11:14.869 04:02:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:14.869 04:02:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:14.869 [2024-12-06 04:02:08.114146] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:14.869 BaseBdev2 00:11:14.869 04:02:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:14.869 04:02:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:11:14.869 04:02:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:11:14.869 04:02:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:14.869 04:02:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:11:14.869 04:02:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:14.869 04:02:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:14.869 04:02:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:14.869 04:02:08 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:14.869 04:02:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:14.869 04:02:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:14.869 04:02:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:11:14.869 04:02:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:14.869 04:02:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:14.869 [ 00:11:14.869 { 00:11:14.869 "name": "BaseBdev2", 00:11:14.869 "aliases": [ 00:11:14.869 "9dce97de-ad06-415c-84e9-ed7912cf08bd" 00:11:14.869 ], 00:11:14.869 "product_name": "Malloc disk", 00:11:14.869 "block_size": 512, 00:11:14.869 "num_blocks": 65536, 00:11:14.869 "uuid": "9dce97de-ad06-415c-84e9-ed7912cf08bd", 00:11:14.869 "assigned_rate_limits": { 00:11:14.869 "rw_ios_per_sec": 0, 00:11:14.869 "rw_mbytes_per_sec": 0, 00:11:14.869 "r_mbytes_per_sec": 0, 00:11:14.869 "w_mbytes_per_sec": 0 00:11:14.869 }, 00:11:14.869 "claimed": true, 00:11:14.869 "claim_type": "exclusive_write", 00:11:14.869 "zoned": false, 00:11:14.869 "supported_io_types": { 00:11:14.869 "read": true, 00:11:14.869 "write": true, 00:11:14.869 "unmap": true, 00:11:14.869 "flush": true, 00:11:14.869 "reset": true, 00:11:14.869 "nvme_admin": false, 00:11:14.869 "nvme_io": false, 00:11:14.869 "nvme_io_md": false, 00:11:14.869 "write_zeroes": true, 00:11:14.869 "zcopy": true, 00:11:14.869 "get_zone_info": false, 00:11:14.869 "zone_management": false, 00:11:14.869 "zone_append": false, 00:11:14.869 "compare": false, 00:11:14.869 "compare_and_write": false, 00:11:14.869 "abort": true, 00:11:14.869 "seek_hole": false, 00:11:14.869 "seek_data": false, 00:11:14.869 "copy": true, 00:11:14.869 "nvme_iov_md": false 00:11:14.869 }, 00:11:14.869 
"memory_domains": [ 00:11:14.869 { 00:11:14.869 "dma_device_id": "system", 00:11:14.869 "dma_device_type": 1 00:11:14.869 }, 00:11:14.869 { 00:11:14.869 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:14.869 "dma_device_type": 2 00:11:14.869 } 00:11:14.869 ], 00:11:14.869 "driver_specific": {} 00:11:14.869 } 00:11:14.869 ] 00:11:14.869 04:02:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:14.869 04:02:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:11:14.869 04:02:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:11:14.869 04:02:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:14.869 04:02:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:11:14.869 04:02:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:14.869 04:02:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:14.870 04:02:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:14.870 04:02:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:14.870 04:02:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:14.870 04:02:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:14.870 04:02:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:14.870 04:02:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:14.870 04:02:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:14.870 04:02:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:11:14.870 04:02:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:14.870 04:02:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:14.870 04:02:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:14.870 04:02:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:15.156 04:02:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:15.156 "name": "Existed_Raid", 00:11:15.156 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:15.156 "strip_size_kb": 0, 00:11:15.156 "state": "configuring", 00:11:15.156 "raid_level": "raid1", 00:11:15.156 "superblock": false, 00:11:15.156 "num_base_bdevs": 3, 00:11:15.156 "num_base_bdevs_discovered": 2, 00:11:15.156 "num_base_bdevs_operational": 3, 00:11:15.156 "base_bdevs_list": [ 00:11:15.156 { 00:11:15.156 "name": "BaseBdev1", 00:11:15.156 "uuid": "63d2b3f5-50d1-4bd9-800c-508539d5d51d", 00:11:15.156 "is_configured": true, 00:11:15.156 "data_offset": 0, 00:11:15.156 "data_size": 65536 00:11:15.156 }, 00:11:15.156 { 00:11:15.156 "name": "BaseBdev2", 00:11:15.156 "uuid": "9dce97de-ad06-415c-84e9-ed7912cf08bd", 00:11:15.156 "is_configured": true, 00:11:15.156 "data_offset": 0, 00:11:15.156 "data_size": 65536 00:11:15.156 }, 00:11:15.156 { 00:11:15.156 "name": "BaseBdev3", 00:11:15.156 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:15.156 "is_configured": false, 00:11:15.156 "data_offset": 0, 00:11:15.156 "data_size": 0 00:11:15.156 } 00:11:15.156 ] 00:11:15.156 }' 00:11:15.156 04:02:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:15.156 04:02:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:15.416 04:02:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 
512 -b BaseBdev3 00:11:15.416 04:02:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:15.416 04:02:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:15.416 [2024-12-06 04:02:08.653848] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:15.416 [2024-12-06 04:02:08.653972] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:11:15.416 [2024-12-06 04:02:08.654006] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:11:15.416 [2024-12-06 04:02:08.654353] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:11:15.416 [2024-12-06 04:02:08.654583] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:11:15.416 [2024-12-06 04:02:08.654628] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:11:15.416 [2024-12-06 04:02:08.654973] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:15.416 BaseBdev3 00:11:15.416 04:02:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:15.416 04:02:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:11:15.416 04:02:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:11:15.416 04:02:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:15.416 04:02:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:11:15.416 04:02:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:15.416 04:02:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:15.416 04:02:08 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:15.416 04:02:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:15.416 04:02:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:15.416 04:02:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:15.416 04:02:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:11:15.416 04:02:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:15.416 04:02:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:15.416 [ 00:11:15.416 { 00:11:15.416 "name": "BaseBdev3", 00:11:15.416 "aliases": [ 00:11:15.416 "0cd2bee4-0d51-47d4-8244-8aafbba288a9" 00:11:15.416 ], 00:11:15.416 "product_name": "Malloc disk", 00:11:15.416 "block_size": 512, 00:11:15.416 "num_blocks": 65536, 00:11:15.416 "uuid": "0cd2bee4-0d51-47d4-8244-8aafbba288a9", 00:11:15.416 "assigned_rate_limits": { 00:11:15.416 "rw_ios_per_sec": 0, 00:11:15.416 "rw_mbytes_per_sec": 0, 00:11:15.416 "r_mbytes_per_sec": 0, 00:11:15.416 "w_mbytes_per_sec": 0 00:11:15.416 }, 00:11:15.416 "claimed": true, 00:11:15.416 "claim_type": "exclusive_write", 00:11:15.416 "zoned": false, 00:11:15.416 "supported_io_types": { 00:11:15.416 "read": true, 00:11:15.416 "write": true, 00:11:15.416 "unmap": true, 00:11:15.416 "flush": true, 00:11:15.416 "reset": true, 00:11:15.416 "nvme_admin": false, 00:11:15.416 "nvme_io": false, 00:11:15.416 "nvme_io_md": false, 00:11:15.416 "write_zeroes": true, 00:11:15.416 "zcopy": true, 00:11:15.416 "get_zone_info": false, 00:11:15.416 "zone_management": false, 00:11:15.416 "zone_append": false, 00:11:15.416 "compare": false, 00:11:15.416 "compare_and_write": false, 00:11:15.416 "abort": true, 00:11:15.416 "seek_hole": false, 00:11:15.416 "seek_data": false, 00:11:15.416 
"copy": true, 00:11:15.416 "nvme_iov_md": false 00:11:15.416 }, 00:11:15.416 "memory_domains": [ 00:11:15.416 { 00:11:15.416 "dma_device_id": "system", 00:11:15.416 "dma_device_type": 1 00:11:15.416 }, 00:11:15.416 { 00:11:15.416 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:15.416 "dma_device_type": 2 00:11:15.416 } 00:11:15.416 ], 00:11:15.416 "driver_specific": {} 00:11:15.416 } 00:11:15.416 ] 00:11:15.416 04:02:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:15.416 04:02:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:11:15.416 04:02:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:11:15.416 04:02:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:15.416 04:02:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:11:15.416 04:02:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:15.416 04:02:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:15.416 04:02:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:15.416 04:02:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:15.416 04:02:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:15.416 04:02:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:15.416 04:02:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:15.416 04:02:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:15.416 04:02:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:15.416 04:02:08 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:15.416 04:02:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:15.416 04:02:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:15.416 04:02:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:15.416 04:02:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:15.416 04:02:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:15.416 "name": "Existed_Raid", 00:11:15.416 "uuid": "f5a9d6c0-c18c-4338-a23d-5ba6952f4401", 00:11:15.416 "strip_size_kb": 0, 00:11:15.416 "state": "online", 00:11:15.416 "raid_level": "raid1", 00:11:15.416 "superblock": false, 00:11:15.416 "num_base_bdevs": 3, 00:11:15.416 "num_base_bdevs_discovered": 3, 00:11:15.416 "num_base_bdevs_operational": 3, 00:11:15.416 "base_bdevs_list": [ 00:11:15.416 { 00:11:15.416 "name": "BaseBdev1", 00:11:15.416 "uuid": "63d2b3f5-50d1-4bd9-800c-508539d5d51d", 00:11:15.416 "is_configured": true, 00:11:15.416 "data_offset": 0, 00:11:15.416 "data_size": 65536 00:11:15.416 }, 00:11:15.416 { 00:11:15.416 "name": "BaseBdev2", 00:11:15.416 "uuid": "9dce97de-ad06-415c-84e9-ed7912cf08bd", 00:11:15.416 "is_configured": true, 00:11:15.416 "data_offset": 0, 00:11:15.416 "data_size": 65536 00:11:15.416 }, 00:11:15.416 { 00:11:15.416 "name": "BaseBdev3", 00:11:15.416 "uuid": "0cd2bee4-0d51-47d4-8244-8aafbba288a9", 00:11:15.416 "is_configured": true, 00:11:15.416 "data_offset": 0, 00:11:15.416 "data_size": 65536 00:11:15.416 } 00:11:15.416 ] 00:11:15.416 }' 00:11:15.416 04:02:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:15.416 04:02:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:15.986 04:02:09 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:11:15.986 04:02:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:11:15.986 04:02:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:11:15.986 04:02:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:11:15.986 04:02:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:11:15.986 04:02:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:11:15.986 04:02:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:11:15.986 04:02:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:15.986 04:02:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:15.986 04:02:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:11:15.986 [2024-12-06 04:02:09.145465] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:15.986 04:02:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:15.986 04:02:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:11:15.986 "name": "Existed_Raid", 00:11:15.986 "aliases": [ 00:11:15.986 "f5a9d6c0-c18c-4338-a23d-5ba6952f4401" 00:11:15.986 ], 00:11:15.986 "product_name": "Raid Volume", 00:11:15.986 "block_size": 512, 00:11:15.986 "num_blocks": 65536, 00:11:15.986 "uuid": "f5a9d6c0-c18c-4338-a23d-5ba6952f4401", 00:11:15.986 "assigned_rate_limits": { 00:11:15.986 "rw_ios_per_sec": 0, 00:11:15.986 "rw_mbytes_per_sec": 0, 00:11:15.986 "r_mbytes_per_sec": 0, 00:11:15.986 "w_mbytes_per_sec": 0 00:11:15.986 }, 00:11:15.986 "claimed": false, 00:11:15.986 "zoned": false, 
00:11:15.986 "supported_io_types": { 00:11:15.986 "read": true, 00:11:15.986 "write": true, 00:11:15.986 "unmap": false, 00:11:15.986 "flush": false, 00:11:15.986 "reset": true, 00:11:15.986 "nvme_admin": false, 00:11:15.986 "nvme_io": false, 00:11:15.986 "nvme_io_md": false, 00:11:15.986 "write_zeroes": true, 00:11:15.986 "zcopy": false, 00:11:15.986 "get_zone_info": false, 00:11:15.986 "zone_management": false, 00:11:15.986 "zone_append": false, 00:11:15.986 "compare": false, 00:11:15.986 "compare_and_write": false, 00:11:15.986 "abort": false, 00:11:15.986 "seek_hole": false, 00:11:15.986 "seek_data": false, 00:11:15.986 "copy": false, 00:11:15.986 "nvme_iov_md": false 00:11:15.986 }, 00:11:15.986 "memory_domains": [ 00:11:15.986 { 00:11:15.986 "dma_device_id": "system", 00:11:15.986 "dma_device_type": 1 00:11:15.986 }, 00:11:15.986 { 00:11:15.986 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:15.986 "dma_device_type": 2 00:11:15.986 }, 00:11:15.986 { 00:11:15.987 "dma_device_id": "system", 00:11:15.987 "dma_device_type": 1 00:11:15.987 }, 00:11:15.987 { 00:11:15.987 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:15.987 "dma_device_type": 2 00:11:15.987 }, 00:11:15.987 { 00:11:15.987 "dma_device_id": "system", 00:11:15.987 "dma_device_type": 1 00:11:15.987 }, 00:11:15.987 { 00:11:15.987 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:15.987 "dma_device_type": 2 00:11:15.987 } 00:11:15.987 ], 00:11:15.987 "driver_specific": { 00:11:15.987 "raid": { 00:11:15.987 "uuid": "f5a9d6c0-c18c-4338-a23d-5ba6952f4401", 00:11:15.987 "strip_size_kb": 0, 00:11:15.987 "state": "online", 00:11:15.987 "raid_level": "raid1", 00:11:15.987 "superblock": false, 00:11:15.987 "num_base_bdevs": 3, 00:11:15.987 "num_base_bdevs_discovered": 3, 00:11:15.987 "num_base_bdevs_operational": 3, 00:11:15.987 "base_bdevs_list": [ 00:11:15.987 { 00:11:15.987 "name": "BaseBdev1", 00:11:15.987 "uuid": "63d2b3f5-50d1-4bd9-800c-508539d5d51d", 00:11:15.987 "is_configured": true, 00:11:15.987 
"data_offset": 0, 00:11:15.987 "data_size": 65536 00:11:15.987 }, 00:11:15.987 { 00:11:15.987 "name": "BaseBdev2", 00:11:15.987 "uuid": "9dce97de-ad06-415c-84e9-ed7912cf08bd", 00:11:15.987 "is_configured": true, 00:11:15.987 "data_offset": 0, 00:11:15.987 "data_size": 65536 00:11:15.987 }, 00:11:15.987 { 00:11:15.987 "name": "BaseBdev3", 00:11:15.987 "uuid": "0cd2bee4-0d51-47d4-8244-8aafbba288a9", 00:11:15.987 "is_configured": true, 00:11:15.987 "data_offset": 0, 00:11:15.987 "data_size": 65536 00:11:15.987 } 00:11:15.987 ] 00:11:15.987 } 00:11:15.987 } 00:11:15.987 }' 00:11:15.987 04:02:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:15.987 04:02:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:11:15.987 BaseBdev2 00:11:15.987 BaseBdev3' 00:11:15.987 04:02:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:15.987 04:02:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:11:15.987 04:02:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:15.987 04:02:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:11:15.987 04:02:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:15.987 04:02:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:15.987 04:02:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:15.987 04:02:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:15.987 04:02:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # 
cmp_base_bdev='512 ' 00:11:15.987 04:02:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:15.987 04:02:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:15.987 04:02:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:11:15.987 04:02:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:15.987 04:02:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:15.987 04:02:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:15.987 04:02:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:16.246 04:02:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:16.246 04:02:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:16.246 04:02:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:16.246 04:02:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:11:16.246 04:02:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:16.246 04:02:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:16.246 04:02:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:16.246 04:02:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:16.246 04:02:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:16.246 04:02:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 
== \5\1\2\ \ \ ]] 00:11:16.246 04:02:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:11:16.246 04:02:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:16.246 04:02:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:16.246 [2024-12-06 04:02:09.392706] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:11:16.246 04:02:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:16.246 04:02:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:11:16.246 04:02:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:11:16.246 04:02:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:11:16.246 04:02:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@199 -- # return 0 00:11:16.246 04:02:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:11:16.246 04:02:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:11:16.246 04:02:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:16.246 04:02:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:16.246 04:02:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:16.246 04:02:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:16.246 04:02:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:11:16.246 04:02:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:16.246 04:02:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # 
local num_base_bdevs 00:11:16.246 04:02:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:16.246 04:02:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:16.246 04:02:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:16.246 04:02:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:16.246 04:02:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:16.246 04:02:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:16.246 04:02:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:16.246 04:02:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:16.247 "name": "Existed_Raid", 00:11:16.247 "uuid": "f5a9d6c0-c18c-4338-a23d-5ba6952f4401", 00:11:16.247 "strip_size_kb": 0, 00:11:16.247 "state": "online", 00:11:16.247 "raid_level": "raid1", 00:11:16.247 "superblock": false, 00:11:16.247 "num_base_bdevs": 3, 00:11:16.247 "num_base_bdevs_discovered": 2, 00:11:16.247 "num_base_bdevs_operational": 2, 00:11:16.247 "base_bdevs_list": [ 00:11:16.247 { 00:11:16.247 "name": null, 00:11:16.247 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:16.247 "is_configured": false, 00:11:16.247 "data_offset": 0, 00:11:16.247 "data_size": 65536 00:11:16.247 }, 00:11:16.247 { 00:11:16.247 "name": "BaseBdev2", 00:11:16.247 "uuid": "9dce97de-ad06-415c-84e9-ed7912cf08bd", 00:11:16.247 "is_configured": true, 00:11:16.247 "data_offset": 0, 00:11:16.247 "data_size": 65536 00:11:16.247 }, 00:11:16.247 { 00:11:16.247 "name": "BaseBdev3", 00:11:16.247 "uuid": "0cd2bee4-0d51-47d4-8244-8aafbba288a9", 00:11:16.247 "is_configured": true, 00:11:16.247 "data_offset": 0, 00:11:16.247 "data_size": 65536 00:11:16.247 } 00:11:16.247 ] 
00:11:16.247 }' 00:11:16.247 04:02:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:16.247 04:02:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:16.816 04:02:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:11:16.816 04:02:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:16.816 04:02:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:16.816 04:02:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:11:16.816 04:02:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:16.816 04:02:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:16.816 04:02:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:16.816 04:02:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:11:16.816 04:02:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:11:16.816 04:02:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:11:16.816 04:02:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:16.816 04:02:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:16.816 [2024-12-06 04:02:10.012405] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:11:16.816 04:02:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:16.816 04:02:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:11:16.816 04:02:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:16.816 04:02:10 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:11:16.816 04:02:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:16.816 04:02:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:16.816 04:02:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:16.816 04:02:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:16.816 04:02:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:11:16.816 04:02:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:11:16.816 04:02:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:11:16.816 04:02:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:16.816 04:02:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:17.076 [2024-12-06 04:02:10.171669] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:11:17.076 [2024-12-06 04:02:10.171772] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:17.076 [2024-12-06 04:02:10.280953] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:17.076 [2024-12-06 04:02:10.281131] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:17.076 [2024-12-06 04:02:10.281184] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:11:17.076 04:02:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:17.076 04:02:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:11:17.076 04:02:10 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:17.076 04:02:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:11:17.076 04:02:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:17.076 04:02:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:17.076 04:02:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:17.076 04:02:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:17.076 04:02:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:11:17.076 04:02:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:11:17.076 04:02:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:11:17.076 04:02:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:11:17.076 04:02:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:11:17.076 04:02:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:11:17.076 04:02:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:17.076 04:02:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:17.076 BaseBdev2 00:11:17.076 04:02:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:17.076 04:02:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:11:17.076 04:02:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:11:17.076 04:02:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:17.076 
04:02:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:11:17.076 04:02:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:17.076 04:02:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:17.076 04:02:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:17.076 04:02:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:17.076 04:02:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:17.076 04:02:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:17.076 04:02:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:11:17.076 04:02:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:17.076 04:02:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:17.076 [ 00:11:17.076 { 00:11:17.076 "name": "BaseBdev2", 00:11:17.076 "aliases": [ 00:11:17.076 "5d4c92e3-7e74-484f-82a9-8617b3da1b9c" 00:11:17.076 ], 00:11:17.076 "product_name": "Malloc disk", 00:11:17.076 "block_size": 512, 00:11:17.076 "num_blocks": 65536, 00:11:17.076 "uuid": "5d4c92e3-7e74-484f-82a9-8617b3da1b9c", 00:11:17.076 "assigned_rate_limits": { 00:11:17.076 "rw_ios_per_sec": 0, 00:11:17.076 "rw_mbytes_per_sec": 0, 00:11:17.076 "r_mbytes_per_sec": 0, 00:11:17.076 "w_mbytes_per_sec": 0 00:11:17.076 }, 00:11:17.076 "claimed": false, 00:11:17.076 "zoned": false, 00:11:17.076 "supported_io_types": { 00:11:17.076 "read": true, 00:11:17.076 "write": true, 00:11:17.076 "unmap": true, 00:11:17.076 "flush": true, 00:11:17.076 "reset": true, 00:11:17.076 "nvme_admin": false, 00:11:17.076 "nvme_io": false, 00:11:17.076 "nvme_io_md": false, 00:11:17.076 "write_zeroes": true, 
00:11:17.076 "zcopy": true, 00:11:17.076 "get_zone_info": false, 00:11:17.076 "zone_management": false, 00:11:17.076 "zone_append": false, 00:11:17.076 "compare": false, 00:11:17.076 "compare_and_write": false, 00:11:17.076 "abort": true, 00:11:17.076 "seek_hole": false, 00:11:17.076 "seek_data": false, 00:11:17.076 "copy": true, 00:11:17.076 "nvme_iov_md": false 00:11:17.076 }, 00:11:17.076 "memory_domains": [ 00:11:17.076 { 00:11:17.076 "dma_device_id": "system", 00:11:17.076 "dma_device_type": 1 00:11:17.076 }, 00:11:17.076 { 00:11:17.077 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:17.077 "dma_device_type": 2 00:11:17.077 } 00:11:17.077 ], 00:11:17.077 "driver_specific": {} 00:11:17.077 } 00:11:17.077 ] 00:11:17.077 04:02:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:17.077 04:02:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:11:17.077 04:02:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:11:17.077 04:02:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:11:17.077 04:02:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:11:17.077 04:02:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:17.077 04:02:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:17.337 BaseBdev3 00:11:17.337 04:02:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:17.337 04:02:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:11:17.337 04:02:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:11:17.337 04:02:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:17.337 04:02:10 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:11:17.337 04:02:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:17.337 04:02:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:17.337 04:02:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:17.337 04:02:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:17.337 04:02:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:17.337 04:02:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:17.337 04:02:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:11:17.337 04:02:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:17.337 04:02:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:17.337 [ 00:11:17.337 { 00:11:17.337 "name": "BaseBdev3", 00:11:17.337 "aliases": [ 00:11:17.337 "d0b23a10-4176-40f2-b177-2d846660c5a4" 00:11:17.337 ], 00:11:17.337 "product_name": "Malloc disk", 00:11:17.337 "block_size": 512, 00:11:17.337 "num_blocks": 65536, 00:11:17.337 "uuid": "d0b23a10-4176-40f2-b177-2d846660c5a4", 00:11:17.337 "assigned_rate_limits": { 00:11:17.337 "rw_ios_per_sec": 0, 00:11:17.337 "rw_mbytes_per_sec": 0, 00:11:17.337 "r_mbytes_per_sec": 0, 00:11:17.337 "w_mbytes_per_sec": 0 00:11:17.337 }, 00:11:17.337 "claimed": false, 00:11:17.337 "zoned": false, 00:11:17.337 "supported_io_types": { 00:11:17.337 "read": true, 00:11:17.337 "write": true, 00:11:17.337 "unmap": true, 00:11:17.337 "flush": true, 00:11:17.337 "reset": true, 00:11:17.337 "nvme_admin": false, 00:11:17.337 "nvme_io": false, 00:11:17.337 "nvme_io_md": false, 00:11:17.337 "write_zeroes": true, 
00:11:17.337 "zcopy": true, 00:11:17.337 "get_zone_info": false, 00:11:17.337 "zone_management": false, 00:11:17.337 "zone_append": false, 00:11:17.337 "compare": false, 00:11:17.337 "compare_and_write": false, 00:11:17.337 "abort": true, 00:11:17.337 "seek_hole": false, 00:11:17.337 "seek_data": false, 00:11:17.337 "copy": true, 00:11:17.337 "nvme_iov_md": false 00:11:17.337 }, 00:11:17.337 "memory_domains": [ 00:11:17.337 { 00:11:17.337 "dma_device_id": "system", 00:11:17.337 "dma_device_type": 1 00:11:17.337 }, 00:11:17.337 { 00:11:17.337 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:17.337 "dma_device_type": 2 00:11:17.337 } 00:11:17.337 ], 00:11:17.337 "driver_specific": {} 00:11:17.337 } 00:11:17.337 ] 00:11:17.337 04:02:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:17.337 04:02:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:11:17.337 04:02:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:11:17.337 04:02:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:11:17.337 04:02:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:11:17.337 04:02:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:17.337 04:02:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:17.337 [2024-12-06 04:02:10.471990] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:17.337 [2024-12-06 04:02:10.472136] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:17.337 [2024-12-06 04:02:10.472197] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:17.337 [2024-12-06 04:02:10.474352] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:17.337 04:02:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:17.337 04:02:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:11:17.337 04:02:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:17.337 04:02:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:17.337 04:02:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:17.337 04:02:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:17.337 04:02:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:17.337 04:02:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:17.337 04:02:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:17.337 04:02:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:17.337 04:02:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:17.337 04:02:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:17.338 04:02:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:17.338 04:02:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:17.338 04:02:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:17.338 04:02:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:17.338 04:02:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 
-- # raid_bdev_info='{ 00:11:17.338 "name": "Existed_Raid", 00:11:17.338 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:17.338 "strip_size_kb": 0, 00:11:17.338 "state": "configuring", 00:11:17.338 "raid_level": "raid1", 00:11:17.338 "superblock": false, 00:11:17.338 "num_base_bdevs": 3, 00:11:17.338 "num_base_bdevs_discovered": 2, 00:11:17.338 "num_base_bdevs_operational": 3, 00:11:17.338 "base_bdevs_list": [ 00:11:17.338 { 00:11:17.338 "name": "BaseBdev1", 00:11:17.338 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:17.338 "is_configured": false, 00:11:17.338 "data_offset": 0, 00:11:17.338 "data_size": 0 00:11:17.338 }, 00:11:17.338 { 00:11:17.338 "name": "BaseBdev2", 00:11:17.338 "uuid": "5d4c92e3-7e74-484f-82a9-8617b3da1b9c", 00:11:17.338 "is_configured": true, 00:11:17.338 "data_offset": 0, 00:11:17.338 "data_size": 65536 00:11:17.338 }, 00:11:17.338 { 00:11:17.338 "name": "BaseBdev3", 00:11:17.338 "uuid": "d0b23a10-4176-40f2-b177-2d846660c5a4", 00:11:17.338 "is_configured": true, 00:11:17.338 "data_offset": 0, 00:11:17.338 "data_size": 65536 00:11:17.338 } 00:11:17.338 ] 00:11:17.338 }' 00:11:17.338 04:02:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:17.338 04:02:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:17.597 04:02:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:11:17.597 04:02:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:17.597 04:02:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:17.597 [2024-12-06 04:02:10.899289] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:11:17.597 04:02:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:17.597 04:02:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state 
Existed_Raid configuring raid1 0 3 00:11:17.597 04:02:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:17.597 04:02:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:17.597 04:02:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:17.597 04:02:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:17.597 04:02:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:17.597 04:02:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:17.597 04:02:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:17.597 04:02:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:17.597 04:02:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:17.597 04:02:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:17.597 04:02:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:17.597 04:02:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:17.597 04:02:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:17.597 04:02:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:17.855 04:02:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:17.855 "name": "Existed_Raid", 00:11:17.855 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:17.855 "strip_size_kb": 0, 00:11:17.855 "state": "configuring", 00:11:17.855 "raid_level": "raid1", 00:11:17.855 "superblock": false, 00:11:17.855 "num_base_bdevs": 3, 
00:11:17.855 "num_base_bdevs_discovered": 1, 00:11:17.855 "num_base_bdevs_operational": 3, 00:11:17.855 "base_bdevs_list": [ 00:11:17.855 { 00:11:17.855 "name": "BaseBdev1", 00:11:17.855 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:17.855 "is_configured": false, 00:11:17.855 "data_offset": 0, 00:11:17.855 "data_size": 0 00:11:17.855 }, 00:11:17.855 { 00:11:17.855 "name": null, 00:11:17.855 "uuid": "5d4c92e3-7e74-484f-82a9-8617b3da1b9c", 00:11:17.855 "is_configured": false, 00:11:17.855 "data_offset": 0, 00:11:17.855 "data_size": 65536 00:11:17.855 }, 00:11:17.855 { 00:11:17.855 "name": "BaseBdev3", 00:11:17.855 "uuid": "d0b23a10-4176-40f2-b177-2d846660c5a4", 00:11:17.855 "is_configured": true, 00:11:17.855 "data_offset": 0, 00:11:17.855 "data_size": 65536 00:11:17.855 } 00:11:17.855 ] 00:11:17.855 }' 00:11:17.855 04:02:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:17.855 04:02:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:18.113 04:02:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:11:18.114 04:02:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:18.114 04:02:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:18.114 04:02:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:18.114 04:02:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:18.114 04:02:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:11:18.114 04:02:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:11:18.114 04:02:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:18.114 04:02:11 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:18.114 [2024-12-06 04:02:11.427560] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:18.114 BaseBdev1 00:11:18.114 04:02:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:18.114 04:02:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:11:18.114 04:02:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:11:18.114 04:02:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:18.114 04:02:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:11:18.114 04:02:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:18.114 04:02:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:18.114 04:02:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:18.114 04:02:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:18.114 04:02:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:18.114 04:02:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:18.114 04:02:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:11:18.114 04:02:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:18.114 04:02:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:18.114 [ 00:11:18.114 { 00:11:18.114 "name": "BaseBdev1", 00:11:18.114 "aliases": [ 00:11:18.114 "97ae7dae-028d-4c38-95c1-5531623b07d2" 00:11:18.114 ], 00:11:18.114 "product_name": "Malloc disk", 
00:11:18.114 "block_size": 512, 00:11:18.114 "num_blocks": 65536, 00:11:18.114 "uuid": "97ae7dae-028d-4c38-95c1-5531623b07d2", 00:11:18.114 "assigned_rate_limits": { 00:11:18.114 "rw_ios_per_sec": 0, 00:11:18.114 "rw_mbytes_per_sec": 0, 00:11:18.114 "r_mbytes_per_sec": 0, 00:11:18.114 "w_mbytes_per_sec": 0 00:11:18.114 }, 00:11:18.114 "claimed": true, 00:11:18.114 "claim_type": "exclusive_write", 00:11:18.114 "zoned": false, 00:11:18.114 "supported_io_types": { 00:11:18.114 "read": true, 00:11:18.114 "write": true, 00:11:18.114 "unmap": true, 00:11:18.114 "flush": true, 00:11:18.114 "reset": true, 00:11:18.114 "nvme_admin": false, 00:11:18.114 "nvme_io": false, 00:11:18.114 "nvme_io_md": false, 00:11:18.114 "write_zeroes": true, 00:11:18.114 "zcopy": true, 00:11:18.114 "get_zone_info": false, 00:11:18.114 "zone_management": false, 00:11:18.114 "zone_append": false, 00:11:18.114 "compare": false, 00:11:18.114 "compare_and_write": false, 00:11:18.114 "abort": true, 00:11:18.114 "seek_hole": false, 00:11:18.114 "seek_data": false, 00:11:18.114 "copy": true, 00:11:18.114 "nvme_iov_md": false 00:11:18.114 }, 00:11:18.114 "memory_domains": [ 00:11:18.114 { 00:11:18.114 "dma_device_id": "system", 00:11:18.114 "dma_device_type": 1 00:11:18.114 }, 00:11:18.114 { 00:11:18.373 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:18.373 "dma_device_type": 2 00:11:18.373 } 00:11:18.373 ], 00:11:18.373 "driver_specific": {} 00:11:18.373 } 00:11:18.373 ] 00:11:18.373 04:02:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:18.373 04:02:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:11:18.373 04:02:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:11:18.373 04:02:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:18.373 04:02:11 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:18.373 04:02:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:18.373 04:02:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:18.373 04:02:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:18.373 04:02:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:18.373 04:02:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:18.373 04:02:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:18.373 04:02:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:18.373 04:02:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:18.373 04:02:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:18.373 04:02:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:18.373 04:02:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:18.373 04:02:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:18.373 04:02:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:18.373 "name": "Existed_Raid", 00:11:18.373 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:18.373 "strip_size_kb": 0, 00:11:18.373 "state": "configuring", 00:11:18.373 "raid_level": "raid1", 00:11:18.373 "superblock": false, 00:11:18.373 "num_base_bdevs": 3, 00:11:18.373 "num_base_bdevs_discovered": 2, 00:11:18.373 "num_base_bdevs_operational": 3, 00:11:18.373 "base_bdevs_list": [ 00:11:18.373 { 00:11:18.373 "name": "BaseBdev1", 00:11:18.373 "uuid": 
"97ae7dae-028d-4c38-95c1-5531623b07d2", 00:11:18.373 "is_configured": true, 00:11:18.373 "data_offset": 0, 00:11:18.373 "data_size": 65536 00:11:18.373 }, 00:11:18.373 { 00:11:18.373 "name": null, 00:11:18.373 "uuid": "5d4c92e3-7e74-484f-82a9-8617b3da1b9c", 00:11:18.373 "is_configured": false, 00:11:18.373 "data_offset": 0, 00:11:18.373 "data_size": 65536 00:11:18.373 }, 00:11:18.373 { 00:11:18.373 "name": "BaseBdev3", 00:11:18.373 "uuid": "d0b23a10-4176-40f2-b177-2d846660c5a4", 00:11:18.373 "is_configured": true, 00:11:18.373 "data_offset": 0, 00:11:18.373 "data_size": 65536 00:11:18.374 } 00:11:18.374 ] 00:11:18.374 }' 00:11:18.374 04:02:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:18.374 04:02:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:18.633 04:02:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:18.633 04:02:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:11:18.633 04:02:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:18.633 04:02:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:18.633 04:02:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:18.633 04:02:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:11:18.633 04:02:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:11:18.633 04:02:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:18.633 04:02:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:18.633 [2024-12-06 04:02:11.906842] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:11:18.633 04:02:11 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:18.633 04:02:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:11:18.633 04:02:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:18.633 04:02:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:18.633 04:02:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:18.633 04:02:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:18.633 04:02:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:18.633 04:02:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:18.633 04:02:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:18.633 04:02:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:18.633 04:02:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:18.633 04:02:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:18.633 04:02:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:18.633 04:02:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:18.633 04:02:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:18.633 04:02:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:18.633 04:02:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:18.633 "name": "Existed_Raid", 00:11:18.633 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:11:18.633 "strip_size_kb": 0, 00:11:18.633 "state": "configuring", 00:11:18.633 "raid_level": "raid1", 00:11:18.633 "superblock": false, 00:11:18.633 "num_base_bdevs": 3, 00:11:18.633 "num_base_bdevs_discovered": 1, 00:11:18.633 "num_base_bdevs_operational": 3, 00:11:18.633 "base_bdevs_list": [ 00:11:18.633 { 00:11:18.633 "name": "BaseBdev1", 00:11:18.633 "uuid": "97ae7dae-028d-4c38-95c1-5531623b07d2", 00:11:18.633 "is_configured": true, 00:11:18.633 "data_offset": 0, 00:11:18.633 "data_size": 65536 00:11:18.633 }, 00:11:18.633 { 00:11:18.633 "name": null, 00:11:18.633 "uuid": "5d4c92e3-7e74-484f-82a9-8617b3da1b9c", 00:11:18.633 "is_configured": false, 00:11:18.633 "data_offset": 0, 00:11:18.633 "data_size": 65536 00:11:18.633 }, 00:11:18.633 { 00:11:18.633 "name": null, 00:11:18.633 "uuid": "d0b23a10-4176-40f2-b177-2d846660c5a4", 00:11:18.633 "is_configured": false, 00:11:18.633 "data_offset": 0, 00:11:18.633 "data_size": 65536 00:11:18.633 } 00:11:18.633 ] 00:11:18.633 }' 00:11:18.633 04:02:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:18.633 04:02:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:19.200 04:02:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:19.200 04:02:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:19.200 04:02:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:19.200 04:02:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:11:19.200 04:02:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:19.200 04:02:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:11:19.200 04:02:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 
-- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:11:19.200 04:02:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:19.200 04:02:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:19.200 [2024-12-06 04:02:12.330194] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:19.200 04:02:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:19.200 04:02:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:11:19.200 04:02:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:19.200 04:02:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:19.200 04:02:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:19.200 04:02:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:19.200 04:02:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:19.200 04:02:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:19.200 04:02:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:19.200 04:02:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:19.200 04:02:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:19.200 04:02:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:19.200 04:02:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:19.200 04:02:12 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:11:19.200 04:02:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:19.200 04:02:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:19.200 04:02:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:19.200 "name": "Existed_Raid", 00:11:19.200 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:19.200 "strip_size_kb": 0, 00:11:19.200 "state": "configuring", 00:11:19.200 "raid_level": "raid1", 00:11:19.200 "superblock": false, 00:11:19.200 "num_base_bdevs": 3, 00:11:19.200 "num_base_bdevs_discovered": 2, 00:11:19.200 "num_base_bdevs_operational": 3, 00:11:19.200 "base_bdevs_list": [ 00:11:19.200 { 00:11:19.200 "name": "BaseBdev1", 00:11:19.200 "uuid": "97ae7dae-028d-4c38-95c1-5531623b07d2", 00:11:19.200 "is_configured": true, 00:11:19.200 "data_offset": 0, 00:11:19.200 "data_size": 65536 00:11:19.200 }, 00:11:19.200 { 00:11:19.200 "name": null, 00:11:19.200 "uuid": "5d4c92e3-7e74-484f-82a9-8617b3da1b9c", 00:11:19.200 "is_configured": false, 00:11:19.200 "data_offset": 0, 00:11:19.200 "data_size": 65536 00:11:19.200 }, 00:11:19.200 { 00:11:19.200 "name": "BaseBdev3", 00:11:19.200 "uuid": "d0b23a10-4176-40f2-b177-2d846660c5a4", 00:11:19.200 "is_configured": true, 00:11:19.200 "data_offset": 0, 00:11:19.200 "data_size": 65536 00:11:19.200 } 00:11:19.200 ] 00:11:19.200 }' 00:11:19.200 04:02:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:19.200 04:02:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:19.472 04:02:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:19.472 04:02:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:11:19.472 04:02:12 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:11:19.472 04:02:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:19.472 04:02:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:19.731 04:02:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:11:19.732 04:02:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:11:19.732 04:02:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:19.732 04:02:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:19.732 [2024-12-06 04:02:12.853313] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:11:19.732 04:02:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:19.732 04:02:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:11:19.732 04:02:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:19.732 04:02:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:19.732 04:02:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:19.732 04:02:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:19.732 04:02:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:19.732 04:02:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:19.732 04:02:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:19.732 04:02:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:19.732 04:02:12 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:19.732 04:02:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:19.732 04:02:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:19.732 04:02:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:19.732 04:02:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:19.732 04:02:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:19.732 04:02:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:19.732 "name": "Existed_Raid", 00:11:19.732 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:19.732 "strip_size_kb": 0, 00:11:19.732 "state": "configuring", 00:11:19.732 "raid_level": "raid1", 00:11:19.732 "superblock": false, 00:11:19.732 "num_base_bdevs": 3, 00:11:19.732 "num_base_bdevs_discovered": 1, 00:11:19.732 "num_base_bdevs_operational": 3, 00:11:19.732 "base_bdevs_list": [ 00:11:19.732 { 00:11:19.732 "name": null, 00:11:19.732 "uuid": "97ae7dae-028d-4c38-95c1-5531623b07d2", 00:11:19.732 "is_configured": false, 00:11:19.732 "data_offset": 0, 00:11:19.732 "data_size": 65536 00:11:19.732 }, 00:11:19.732 { 00:11:19.732 "name": null, 00:11:19.732 "uuid": "5d4c92e3-7e74-484f-82a9-8617b3da1b9c", 00:11:19.732 "is_configured": false, 00:11:19.732 "data_offset": 0, 00:11:19.732 "data_size": 65536 00:11:19.732 }, 00:11:19.732 { 00:11:19.732 "name": "BaseBdev3", 00:11:19.732 "uuid": "d0b23a10-4176-40f2-b177-2d846660c5a4", 00:11:19.732 "is_configured": true, 00:11:19.732 "data_offset": 0, 00:11:19.732 "data_size": 65536 00:11:19.732 } 00:11:19.732 ] 00:11:19.732 }' 00:11:19.732 04:02:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:19.732 04:02:13 bdev_raid.raid_state_function_test 
-- common/autotest_common.sh@10 -- # set +x 00:11:20.299 04:02:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:20.299 04:02:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:11:20.299 04:02:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:20.299 04:02:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:20.299 04:02:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:20.299 04:02:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:11:20.299 04:02:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:11:20.299 04:02:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:20.299 04:02:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:20.299 [2024-12-06 04:02:13.496178] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:20.299 04:02:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:20.299 04:02:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:11:20.299 04:02:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:20.299 04:02:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:20.299 04:02:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:20.299 04:02:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:20.299 04:02:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # 
local num_base_bdevs_operational=3 00:11:20.299 04:02:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:20.299 04:02:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:20.299 04:02:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:20.299 04:02:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:20.299 04:02:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:20.299 04:02:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:20.299 04:02:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:20.299 04:02:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:20.299 04:02:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:20.299 04:02:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:20.299 "name": "Existed_Raid", 00:11:20.299 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:20.299 "strip_size_kb": 0, 00:11:20.299 "state": "configuring", 00:11:20.299 "raid_level": "raid1", 00:11:20.299 "superblock": false, 00:11:20.299 "num_base_bdevs": 3, 00:11:20.299 "num_base_bdevs_discovered": 2, 00:11:20.299 "num_base_bdevs_operational": 3, 00:11:20.299 "base_bdevs_list": [ 00:11:20.299 { 00:11:20.299 "name": null, 00:11:20.299 "uuid": "97ae7dae-028d-4c38-95c1-5531623b07d2", 00:11:20.299 "is_configured": false, 00:11:20.299 "data_offset": 0, 00:11:20.299 "data_size": 65536 00:11:20.299 }, 00:11:20.299 { 00:11:20.299 "name": "BaseBdev2", 00:11:20.299 "uuid": "5d4c92e3-7e74-484f-82a9-8617b3da1b9c", 00:11:20.299 "is_configured": true, 00:11:20.300 "data_offset": 0, 00:11:20.300 "data_size": 65536 00:11:20.300 }, 00:11:20.300 { 
00:11:20.300 "name": "BaseBdev3", 00:11:20.300 "uuid": "d0b23a10-4176-40f2-b177-2d846660c5a4", 00:11:20.300 "is_configured": true, 00:11:20.300 "data_offset": 0, 00:11:20.300 "data_size": 65536 00:11:20.300 } 00:11:20.300 ] 00:11:20.300 }' 00:11:20.300 04:02:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:20.300 04:02:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:20.869 04:02:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:20.869 04:02:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:11:20.869 04:02:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:20.869 04:02:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:20.869 04:02:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:20.869 04:02:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:11:20.869 04:02:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:20.869 04:02:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:20.869 04:02:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:11:20.869 04:02:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:20.869 04:02:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:20.869 04:02:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 97ae7dae-028d-4c38-95c1-5531623b07d2 00:11:20.869 04:02:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:20.869 04:02:14 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:20.869 [2024-12-06 04:02:14.046921] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:11:20.869 [2024-12-06 04:02:14.047110] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:11:20.869 [2024-12-06 04:02:14.047141] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:11:20.869 [2024-12-06 04:02:14.047464] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:11:20.869 [2024-12-06 04:02:14.047674] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:11:20.869 [2024-12-06 04:02:14.047720] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:11:20.869 [2024-12-06 04:02:14.048081] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:20.869 NewBaseBdev 00:11:20.869 04:02:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:20.869 04:02:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:11:20.869 04:02:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:11:20.869 04:02:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:20.869 04:02:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:11:20.869 04:02:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:20.869 04:02:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:20.869 04:02:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:20.869 04:02:14 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:11:20.869 04:02:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:20.869 04:02:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:20.869 04:02:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:11:20.869 04:02:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:20.869 04:02:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:20.869 [ 00:11:20.869 { 00:11:20.869 "name": "NewBaseBdev", 00:11:20.869 "aliases": [ 00:11:20.869 "97ae7dae-028d-4c38-95c1-5531623b07d2" 00:11:20.869 ], 00:11:20.869 "product_name": "Malloc disk", 00:11:20.869 "block_size": 512, 00:11:20.869 "num_blocks": 65536, 00:11:20.869 "uuid": "97ae7dae-028d-4c38-95c1-5531623b07d2", 00:11:20.869 "assigned_rate_limits": { 00:11:20.869 "rw_ios_per_sec": 0, 00:11:20.869 "rw_mbytes_per_sec": 0, 00:11:20.869 "r_mbytes_per_sec": 0, 00:11:20.869 "w_mbytes_per_sec": 0 00:11:20.869 }, 00:11:20.869 "claimed": true, 00:11:20.869 "claim_type": "exclusive_write", 00:11:20.869 "zoned": false, 00:11:20.869 "supported_io_types": { 00:11:20.869 "read": true, 00:11:20.869 "write": true, 00:11:20.869 "unmap": true, 00:11:20.869 "flush": true, 00:11:20.869 "reset": true, 00:11:20.869 "nvme_admin": false, 00:11:20.869 "nvme_io": false, 00:11:20.869 "nvme_io_md": false, 00:11:20.869 "write_zeroes": true, 00:11:20.869 "zcopy": true, 00:11:20.869 "get_zone_info": false, 00:11:20.869 "zone_management": false, 00:11:20.869 "zone_append": false, 00:11:20.869 "compare": false, 00:11:20.869 "compare_and_write": false, 00:11:20.869 "abort": true, 00:11:20.869 "seek_hole": false, 00:11:20.869 "seek_data": false, 00:11:20.869 "copy": true, 00:11:20.869 "nvme_iov_md": false 00:11:20.869 }, 00:11:20.869 "memory_domains": [ 00:11:20.869 { 00:11:20.869 
"dma_device_id": "system", 00:11:20.869 "dma_device_type": 1 00:11:20.869 }, 00:11:20.869 { 00:11:20.869 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:20.869 "dma_device_type": 2 00:11:20.869 } 00:11:20.869 ], 00:11:20.869 "driver_specific": {} 00:11:20.869 } 00:11:20.869 ] 00:11:20.869 04:02:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:20.869 04:02:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:11:20.869 04:02:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:11:20.869 04:02:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:20.869 04:02:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:20.869 04:02:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:20.869 04:02:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:20.869 04:02:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:20.869 04:02:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:20.869 04:02:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:20.869 04:02:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:20.869 04:02:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:20.869 04:02:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:20.869 04:02:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:20.869 04:02:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 
00:11:20.869 04:02:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:20.869 04:02:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:20.869 04:02:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:20.869 "name": "Existed_Raid", 00:11:20.869 "uuid": "84743c44-8512-4370-bb94-a268b35cda00", 00:11:20.869 "strip_size_kb": 0, 00:11:20.869 "state": "online", 00:11:20.869 "raid_level": "raid1", 00:11:20.869 "superblock": false, 00:11:20.869 "num_base_bdevs": 3, 00:11:20.869 "num_base_bdevs_discovered": 3, 00:11:20.869 "num_base_bdevs_operational": 3, 00:11:20.869 "base_bdevs_list": [ 00:11:20.869 { 00:11:20.869 "name": "NewBaseBdev", 00:11:20.869 "uuid": "97ae7dae-028d-4c38-95c1-5531623b07d2", 00:11:20.869 "is_configured": true, 00:11:20.869 "data_offset": 0, 00:11:20.869 "data_size": 65536 00:11:20.869 }, 00:11:20.869 { 00:11:20.869 "name": "BaseBdev2", 00:11:20.869 "uuid": "5d4c92e3-7e74-484f-82a9-8617b3da1b9c", 00:11:20.869 "is_configured": true, 00:11:20.869 "data_offset": 0, 00:11:20.869 "data_size": 65536 00:11:20.869 }, 00:11:20.869 { 00:11:20.869 "name": "BaseBdev3", 00:11:20.869 "uuid": "d0b23a10-4176-40f2-b177-2d846660c5a4", 00:11:20.869 "is_configured": true, 00:11:20.869 "data_offset": 0, 00:11:20.869 "data_size": 65536 00:11:20.869 } 00:11:20.869 ] 00:11:20.869 }' 00:11:20.869 04:02:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:20.869 04:02:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:21.441 04:02:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:11:21.441 04:02:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:11:21.441 04:02:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:11:21.441 04:02:14 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:11:21.441 04:02:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:11:21.441 04:02:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:11:21.441 04:02:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:11:21.441 04:02:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:11:21.441 04:02:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:21.441 04:02:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:21.441 [2024-12-06 04:02:14.514482] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:21.441 04:02:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:21.441 04:02:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:11:21.441 "name": "Existed_Raid", 00:11:21.441 "aliases": [ 00:11:21.441 "84743c44-8512-4370-bb94-a268b35cda00" 00:11:21.441 ], 00:11:21.441 "product_name": "Raid Volume", 00:11:21.441 "block_size": 512, 00:11:21.441 "num_blocks": 65536, 00:11:21.441 "uuid": "84743c44-8512-4370-bb94-a268b35cda00", 00:11:21.441 "assigned_rate_limits": { 00:11:21.441 "rw_ios_per_sec": 0, 00:11:21.441 "rw_mbytes_per_sec": 0, 00:11:21.441 "r_mbytes_per_sec": 0, 00:11:21.441 "w_mbytes_per_sec": 0 00:11:21.441 }, 00:11:21.441 "claimed": false, 00:11:21.441 "zoned": false, 00:11:21.441 "supported_io_types": { 00:11:21.441 "read": true, 00:11:21.441 "write": true, 00:11:21.441 "unmap": false, 00:11:21.441 "flush": false, 00:11:21.441 "reset": true, 00:11:21.441 "nvme_admin": false, 00:11:21.441 "nvme_io": false, 00:11:21.441 "nvme_io_md": false, 00:11:21.441 "write_zeroes": true, 00:11:21.441 "zcopy": false, 00:11:21.441 
"get_zone_info": false, 00:11:21.441 "zone_management": false, 00:11:21.441 "zone_append": false, 00:11:21.441 "compare": false, 00:11:21.441 "compare_and_write": false, 00:11:21.441 "abort": false, 00:11:21.441 "seek_hole": false, 00:11:21.441 "seek_data": false, 00:11:21.441 "copy": false, 00:11:21.441 "nvme_iov_md": false 00:11:21.441 }, 00:11:21.441 "memory_domains": [ 00:11:21.441 { 00:11:21.441 "dma_device_id": "system", 00:11:21.441 "dma_device_type": 1 00:11:21.441 }, 00:11:21.441 { 00:11:21.441 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:21.441 "dma_device_type": 2 00:11:21.441 }, 00:11:21.441 { 00:11:21.441 "dma_device_id": "system", 00:11:21.441 "dma_device_type": 1 00:11:21.441 }, 00:11:21.441 { 00:11:21.441 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:21.441 "dma_device_type": 2 00:11:21.441 }, 00:11:21.441 { 00:11:21.441 "dma_device_id": "system", 00:11:21.441 "dma_device_type": 1 00:11:21.441 }, 00:11:21.441 { 00:11:21.441 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:21.441 "dma_device_type": 2 00:11:21.441 } 00:11:21.441 ], 00:11:21.441 "driver_specific": { 00:11:21.441 "raid": { 00:11:21.441 "uuid": "84743c44-8512-4370-bb94-a268b35cda00", 00:11:21.441 "strip_size_kb": 0, 00:11:21.441 "state": "online", 00:11:21.441 "raid_level": "raid1", 00:11:21.441 "superblock": false, 00:11:21.441 "num_base_bdevs": 3, 00:11:21.441 "num_base_bdevs_discovered": 3, 00:11:21.441 "num_base_bdevs_operational": 3, 00:11:21.441 "base_bdevs_list": [ 00:11:21.441 { 00:11:21.441 "name": "NewBaseBdev", 00:11:21.441 "uuid": "97ae7dae-028d-4c38-95c1-5531623b07d2", 00:11:21.441 "is_configured": true, 00:11:21.441 "data_offset": 0, 00:11:21.441 "data_size": 65536 00:11:21.441 }, 00:11:21.441 { 00:11:21.441 "name": "BaseBdev2", 00:11:21.441 "uuid": "5d4c92e3-7e74-484f-82a9-8617b3da1b9c", 00:11:21.441 "is_configured": true, 00:11:21.441 "data_offset": 0, 00:11:21.441 "data_size": 65536 00:11:21.441 }, 00:11:21.441 { 00:11:21.441 "name": "BaseBdev3", 00:11:21.441 "uuid": 
"d0b23a10-4176-40f2-b177-2d846660c5a4", 00:11:21.441 "is_configured": true, 00:11:21.441 "data_offset": 0, 00:11:21.441 "data_size": 65536 00:11:21.441 } 00:11:21.441 ] 00:11:21.441 } 00:11:21.441 } 00:11:21.441 }' 00:11:21.441 04:02:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:21.441 04:02:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:11:21.441 BaseBdev2 00:11:21.441 BaseBdev3' 00:11:21.441 04:02:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:21.441 04:02:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:11:21.441 04:02:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:21.441 04:02:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:11:21.441 04:02:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:21.441 04:02:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:21.441 04:02:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:21.442 04:02:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:21.442 04:02:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:21.442 04:02:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:21.442 04:02:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:21.442 04:02:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, 
.md_size, .md_interleave, .dif_type] | join(" ")' 00:11:21.442 04:02:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:11:21.442 04:02:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:21.442 04:02:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:21.442 04:02:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:21.442 04:02:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:21.442 04:02:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:21.442 04:02:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:21.442 04:02:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:11:21.442 04:02:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:21.442 04:02:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:21.442 04:02:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:21.442 04:02:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:21.702 04:02:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:21.702 04:02:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:21.702 04:02:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:11:21.702 04:02:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:21.702 04:02:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 
00:11:21.702 [2024-12-06 04:02:14.809698] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:21.702 [2024-12-06 04:02:14.809797] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:21.703 [2024-12-06 04:02:14.809910] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:21.703 [2024-12-06 04:02:14.810235] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:21.703 [2024-12-06 04:02:14.810249] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:11:21.703 04:02:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:21.703 04:02:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 67486 00:11:21.703 04:02:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # '[' -z 67486 ']' 00:11:21.703 04:02:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # kill -0 67486 00:11:21.703 04:02:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # uname 00:11:21.703 04:02:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:21.703 04:02:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 67486 00:11:21.703 04:02:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:21.703 04:02:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:21.703 04:02:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 67486' 00:11:21.703 killing process with pid 67486 00:11:21.703 04:02:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@973 -- # kill 67486 00:11:21.703 
[2024-12-06 04:02:14.868226] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:21.703 04:02:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@978 -- # wait 67486 00:11:21.962 [2024-12-06 04:02:15.164078] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:23.336 04:02:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:11:23.336 00:11:23.336 real 0m10.631s 00:11:23.336 user 0m16.899s 00:11:23.336 sys 0m1.836s 00:11:23.336 04:02:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:23.336 04:02:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:23.336 ************************************ 00:11:23.336 END TEST raid_state_function_test 00:11:23.336 ************************************ 00:11:23.336 04:02:16 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test raid1 3 true 00:11:23.336 04:02:16 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:11:23.336 04:02:16 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:23.336 04:02:16 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:23.336 ************************************ 00:11:23.336 START TEST raid_state_function_test_sb 00:11:23.336 ************************************ 00:11:23.336 04:02:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test raid1 3 true 00:11:23.336 04:02:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:11:23.336 04:02:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:11:23.336 04:02:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:11:23.336 04:02:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:11:23.336 04:02:16 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:11:23.336 04:02:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:23.336 04:02:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:11:23.336 04:02:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:23.336 04:02:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:23.336 04:02:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:11:23.336 04:02:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:23.336 04:02:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:23.336 04:02:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:11:23.336 04:02:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:23.336 04:02:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:23.336 04:02:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:11:23.336 04:02:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:11:23.336 04:02:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:11:23.336 04:02:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:11:23.336 04:02:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:11:23.336 04:02:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:11:23.336 04:02:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:11:23.336 
04:02:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:11:23.336 04:02:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:11:23.336 04:02:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:11:23.336 Process raid pid: 68113 00:11:23.336 04:02:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=68113 00:11:23.336 04:02:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:11:23.336 04:02:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 68113' 00:11:23.336 04:02:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 68113 00:11:23.336 04:02:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 68113 ']' 00:11:23.336 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:23.336 04:02:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:23.336 04:02:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:23.336 04:02:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:23.336 04:02:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:23.336 04:02:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:23.336 [2024-12-06 04:02:16.429278] Starting SPDK v25.01-pre git sha1 a4d2a837b / DPDK 24.03.0 initialization... 
00:11:23.336 [2024-12-06 04:02:16.429481] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:23.336 [2024-12-06 04:02:16.602568] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:23.595 [2024-12-06 04:02:16.723056] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:23.595 [2024-12-06 04:02:16.929498] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:23.595 [2024-12-06 04:02:16.929637] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:24.163 04:02:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:24.163 04:02:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:11:24.163 04:02:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:11:24.163 04:02:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:24.163 04:02:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:24.163 [2024-12-06 04:02:17.268732] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:24.163 [2024-12-06 04:02:17.268791] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:24.163 [2024-12-06 04:02:17.268807] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:24.163 [2024-12-06 04:02:17.268817] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:24.163 [2024-12-06 04:02:17.268823] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 
BaseBdev3 00:11:24.163 [2024-12-06 04:02:17.268832] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:11:24.163 04:02:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:24.163 04:02:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:11:24.163 04:02:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:24.163 04:02:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:24.163 04:02:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:24.163 04:02:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:24.163 04:02:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:24.163 04:02:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:24.163 04:02:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:24.163 04:02:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:24.163 04:02:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:24.163 04:02:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:24.163 04:02:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:24.163 04:02:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:24.163 04:02:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:24.163 04:02:17 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:24.163 04:02:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:24.163 "name": "Existed_Raid", 00:11:24.163 "uuid": "258e6bc8-01eb-4f43-bb02-ec628469c06c", 00:11:24.163 "strip_size_kb": 0, 00:11:24.163 "state": "configuring", 00:11:24.163 "raid_level": "raid1", 00:11:24.163 "superblock": true, 00:11:24.163 "num_base_bdevs": 3, 00:11:24.163 "num_base_bdevs_discovered": 0, 00:11:24.163 "num_base_bdevs_operational": 3, 00:11:24.163 "base_bdevs_list": [ 00:11:24.163 { 00:11:24.163 "name": "BaseBdev1", 00:11:24.163 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:24.163 "is_configured": false, 00:11:24.163 "data_offset": 0, 00:11:24.163 "data_size": 0 00:11:24.163 }, 00:11:24.163 { 00:11:24.163 "name": "BaseBdev2", 00:11:24.163 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:24.163 "is_configured": false, 00:11:24.163 "data_offset": 0, 00:11:24.163 "data_size": 0 00:11:24.163 }, 00:11:24.163 { 00:11:24.163 "name": "BaseBdev3", 00:11:24.163 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:24.163 "is_configured": false, 00:11:24.163 "data_offset": 0, 00:11:24.163 "data_size": 0 00:11:24.163 } 00:11:24.163 ] 00:11:24.163 }' 00:11:24.163 04:02:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:24.163 04:02:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:24.423 04:02:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:11:24.423 04:02:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:24.423 04:02:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:24.423 [2024-12-06 04:02:17.668014] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:24.423 [2024-12-06 04:02:17.668111] bdev_raid.c: 380:raid_bdev_cleanup: 
*DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:11:24.423 04:02:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:24.423 04:02:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:11:24.423 04:02:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:24.424 04:02:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:24.424 [2024-12-06 04:02:17.680004] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:24.424 [2024-12-06 04:02:17.680095] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:24.424 [2024-12-06 04:02:17.680108] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:24.424 [2024-12-06 04:02:17.680118] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:24.424 [2024-12-06 04:02:17.680125] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:11:24.424 [2024-12-06 04:02:17.680133] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:11:24.424 04:02:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:24.424 04:02:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:11:24.424 04:02:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:24.424 04:02:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:24.424 [2024-12-06 04:02:17.731327] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:24.424 BaseBdev1 
00:11:24.424 04:02:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:24.424 04:02:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:11:24.424 04:02:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:11:24.424 04:02:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:24.424 04:02:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:11:24.424 04:02:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:24.424 04:02:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:24.424 04:02:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:24.424 04:02:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:24.424 04:02:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:24.424 04:02:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:24.424 04:02:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:11:24.424 04:02:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:24.424 04:02:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:24.424 [ 00:11:24.424 { 00:11:24.424 "name": "BaseBdev1", 00:11:24.424 "aliases": [ 00:11:24.424 "082c757a-f6d1-401d-89e8-8eb8059a8a87" 00:11:24.424 ], 00:11:24.424 "product_name": "Malloc disk", 00:11:24.424 "block_size": 512, 00:11:24.424 "num_blocks": 65536, 00:11:24.424 "uuid": "082c757a-f6d1-401d-89e8-8eb8059a8a87", 00:11:24.424 "assigned_rate_limits": { 00:11:24.424 
"rw_ios_per_sec": 0, 00:11:24.424 "rw_mbytes_per_sec": 0, 00:11:24.424 "r_mbytes_per_sec": 0, 00:11:24.424 "w_mbytes_per_sec": 0 00:11:24.424 }, 00:11:24.424 "claimed": true, 00:11:24.424 "claim_type": "exclusive_write", 00:11:24.424 "zoned": false, 00:11:24.424 "supported_io_types": { 00:11:24.424 "read": true, 00:11:24.424 "write": true, 00:11:24.424 "unmap": true, 00:11:24.424 "flush": true, 00:11:24.424 "reset": true, 00:11:24.424 "nvme_admin": false, 00:11:24.424 "nvme_io": false, 00:11:24.424 "nvme_io_md": false, 00:11:24.424 "write_zeroes": true, 00:11:24.424 "zcopy": true, 00:11:24.424 "get_zone_info": false, 00:11:24.424 "zone_management": false, 00:11:24.424 "zone_append": false, 00:11:24.424 "compare": false, 00:11:24.424 "compare_and_write": false, 00:11:24.424 "abort": true, 00:11:24.424 "seek_hole": false, 00:11:24.424 "seek_data": false, 00:11:24.424 "copy": true, 00:11:24.424 "nvme_iov_md": false 00:11:24.424 }, 00:11:24.424 "memory_domains": [ 00:11:24.424 { 00:11:24.424 "dma_device_id": "system", 00:11:24.424 "dma_device_type": 1 00:11:24.424 }, 00:11:24.424 { 00:11:24.424 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:24.424 "dma_device_type": 2 00:11:24.424 } 00:11:24.424 ], 00:11:24.424 "driver_specific": {} 00:11:24.424 } 00:11:24.424 ] 00:11:24.424 04:02:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:24.424 04:02:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:11:24.424 04:02:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:11:24.424 04:02:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:24.424 04:02:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:24.424 04:02:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local 
raid_level=raid1 00:11:24.424 04:02:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:24.424 04:02:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:24.424 04:02:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:24.424 04:02:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:24.424 04:02:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:24.424 04:02:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:24.424 04:02:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:24.424 04:02:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:24.424 04:02:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:24.424 04:02:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:24.684 04:02:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:24.684 04:02:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:24.684 "name": "Existed_Raid", 00:11:24.684 "uuid": "f054f7c8-ca82-4de3-af04-877c703be226", 00:11:24.684 "strip_size_kb": 0, 00:11:24.684 "state": "configuring", 00:11:24.684 "raid_level": "raid1", 00:11:24.684 "superblock": true, 00:11:24.684 "num_base_bdevs": 3, 00:11:24.684 "num_base_bdevs_discovered": 1, 00:11:24.684 "num_base_bdevs_operational": 3, 00:11:24.684 "base_bdevs_list": [ 00:11:24.684 { 00:11:24.684 "name": "BaseBdev1", 00:11:24.684 "uuid": "082c757a-f6d1-401d-89e8-8eb8059a8a87", 00:11:24.684 "is_configured": true, 00:11:24.684 "data_offset": 2048, 00:11:24.684 "data_size": 63488 
00:11:24.684 }, 00:11:24.684 { 00:11:24.684 "name": "BaseBdev2", 00:11:24.684 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:24.684 "is_configured": false, 00:11:24.684 "data_offset": 0, 00:11:24.684 "data_size": 0 00:11:24.684 }, 00:11:24.684 { 00:11:24.684 "name": "BaseBdev3", 00:11:24.684 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:24.684 "is_configured": false, 00:11:24.684 "data_offset": 0, 00:11:24.684 "data_size": 0 00:11:24.684 } 00:11:24.684 ] 00:11:24.684 }' 00:11:24.684 04:02:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:24.684 04:02:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:24.943 04:02:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:11:24.943 04:02:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:24.943 04:02:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:24.943 [2024-12-06 04:02:18.214544] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:24.943 [2024-12-06 04:02:18.214646] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:11:24.943 04:02:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:24.943 04:02:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:11:24.943 04:02:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:24.943 04:02:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:24.943 [2024-12-06 04:02:18.222568] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:24.943 [2024-12-06 04:02:18.224382] 
bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:24.943 [2024-12-06 04:02:18.224466] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:24.943 [2024-12-06 04:02:18.224480] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:11:24.943 [2024-12-06 04:02:18.224490] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:11:24.943 04:02:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:24.943 04:02:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:11:24.943 04:02:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:24.943 04:02:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:11:24.943 04:02:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:24.943 04:02:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:24.943 04:02:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:24.944 04:02:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:24.944 04:02:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:24.944 04:02:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:24.944 04:02:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:24.944 04:02:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:24.944 04:02:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 
00:11:24.944 04:02:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:24.944 04:02:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:24.944 04:02:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:24.944 04:02:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:24.944 04:02:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:24.944 04:02:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:24.944 "name": "Existed_Raid", 00:11:24.944 "uuid": "c617756e-5d55-4e1c-93a7-f6d8d8fd4b4b", 00:11:24.944 "strip_size_kb": 0, 00:11:24.944 "state": "configuring", 00:11:24.944 "raid_level": "raid1", 00:11:24.944 "superblock": true, 00:11:24.944 "num_base_bdevs": 3, 00:11:24.944 "num_base_bdevs_discovered": 1, 00:11:24.944 "num_base_bdevs_operational": 3, 00:11:24.944 "base_bdevs_list": [ 00:11:24.944 { 00:11:24.944 "name": "BaseBdev1", 00:11:24.944 "uuid": "082c757a-f6d1-401d-89e8-8eb8059a8a87", 00:11:24.944 "is_configured": true, 00:11:24.944 "data_offset": 2048, 00:11:24.944 "data_size": 63488 00:11:24.944 }, 00:11:24.944 { 00:11:24.944 "name": "BaseBdev2", 00:11:24.944 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:24.944 "is_configured": false, 00:11:24.944 "data_offset": 0, 00:11:24.944 "data_size": 0 00:11:24.944 }, 00:11:24.944 { 00:11:24.944 "name": "BaseBdev3", 00:11:24.944 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:24.944 "is_configured": false, 00:11:24.944 "data_offset": 0, 00:11:24.944 "data_size": 0 00:11:24.944 } 00:11:24.944 ] 00:11:24.944 }' 00:11:24.944 04:02:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:24.944 04:02:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set 
+x 00:11:25.515 04:02:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:11:25.515 04:02:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:25.515 04:02:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:25.515 [2024-12-06 04:02:18.676860] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:25.515 BaseBdev2 00:11:25.515 04:02:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:25.515 04:02:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:11:25.515 04:02:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:11:25.515 04:02:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:25.515 04:02:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:11:25.515 04:02:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:25.515 04:02:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:25.515 04:02:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:25.515 04:02:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:25.515 04:02:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:25.515 04:02:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:25.515 04:02:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:11:25.515 04:02:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:11:25.515 04:02:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:25.515 [ 00:11:25.515 { 00:11:25.515 "name": "BaseBdev2", 00:11:25.515 "aliases": [ 00:11:25.515 "970f8e24-8a4f-400b-a68a-fe4fd0da65e7" 00:11:25.515 ], 00:11:25.515 "product_name": "Malloc disk", 00:11:25.515 "block_size": 512, 00:11:25.515 "num_blocks": 65536, 00:11:25.515 "uuid": "970f8e24-8a4f-400b-a68a-fe4fd0da65e7", 00:11:25.515 "assigned_rate_limits": { 00:11:25.515 "rw_ios_per_sec": 0, 00:11:25.515 "rw_mbytes_per_sec": 0, 00:11:25.515 "r_mbytes_per_sec": 0, 00:11:25.515 "w_mbytes_per_sec": 0 00:11:25.515 }, 00:11:25.515 "claimed": true, 00:11:25.515 "claim_type": "exclusive_write", 00:11:25.515 "zoned": false, 00:11:25.515 "supported_io_types": { 00:11:25.515 "read": true, 00:11:25.515 "write": true, 00:11:25.515 "unmap": true, 00:11:25.515 "flush": true, 00:11:25.515 "reset": true, 00:11:25.515 "nvme_admin": false, 00:11:25.515 "nvme_io": false, 00:11:25.515 "nvme_io_md": false, 00:11:25.515 "write_zeroes": true, 00:11:25.515 "zcopy": true, 00:11:25.515 "get_zone_info": false, 00:11:25.515 "zone_management": false, 00:11:25.515 "zone_append": false, 00:11:25.515 "compare": false, 00:11:25.515 "compare_and_write": false, 00:11:25.515 "abort": true, 00:11:25.515 "seek_hole": false, 00:11:25.515 "seek_data": false, 00:11:25.515 "copy": true, 00:11:25.515 "nvme_iov_md": false 00:11:25.515 }, 00:11:25.515 "memory_domains": [ 00:11:25.515 { 00:11:25.515 "dma_device_id": "system", 00:11:25.515 "dma_device_type": 1 00:11:25.515 }, 00:11:25.515 { 00:11:25.515 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:25.515 "dma_device_type": 2 00:11:25.515 } 00:11:25.515 ], 00:11:25.515 "driver_specific": {} 00:11:25.515 } 00:11:25.515 ] 00:11:25.515 04:02:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:25.515 04:02:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 
00:11:25.515 04:02:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:11:25.515 04:02:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:25.515 04:02:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:11:25.515 04:02:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:25.515 04:02:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:25.515 04:02:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:25.515 04:02:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:25.515 04:02:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:25.515 04:02:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:25.515 04:02:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:25.515 04:02:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:25.515 04:02:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:25.515 04:02:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:25.515 04:02:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:25.515 04:02:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:25.515 04:02:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:25.515 04:02:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:25.515 
04:02:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:25.515 "name": "Existed_Raid", 00:11:25.515 "uuid": "c617756e-5d55-4e1c-93a7-f6d8d8fd4b4b", 00:11:25.515 "strip_size_kb": 0, 00:11:25.515 "state": "configuring", 00:11:25.515 "raid_level": "raid1", 00:11:25.515 "superblock": true, 00:11:25.515 "num_base_bdevs": 3, 00:11:25.515 "num_base_bdevs_discovered": 2, 00:11:25.515 "num_base_bdevs_operational": 3, 00:11:25.515 "base_bdevs_list": [ 00:11:25.515 { 00:11:25.515 "name": "BaseBdev1", 00:11:25.515 "uuid": "082c757a-f6d1-401d-89e8-8eb8059a8a87", 00:11:25.515 "is_configured": true, 00:11:25.515 "data_offset": 2048, 00:11:25.515 "data_size": 63488 00:11:25.515 }, 00:11:25.515 { 00:11:25.515 "name": "BaseBdev2", 00:11:25.515 "uuid": "970f8e24-8a4f-400b-a68a-fe4fd0da65e7", 00:11:25.515 "is_configured": true, 00:11:25.515 "data_offset": 2048, 00:11:25.515 "data_size": 63488 00:11:25.515 }, 00:11:25.515 { 00:11:25.515 "name": "BaseBdev3", 00:11:25.515 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:25.515 "is_configured": false, 00:11:25.515 "data_offset": 0, 00:11:25.515 "data_size": 0 00:11:25.515 } 00:11:25.515 ] 00:11:25.515 }' 00:11:25.515 04:02:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:25.515 04:02:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:26.146 04:02:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:11:26.146 04:02:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:26.146 04:02:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:26.146 [2024-12-06 04:02:19.206545] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:26.146 [2024-12-06 04:02:19.206803] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 
0x617000007e80 00:11:26.146 [2024-12-06 04:02:19.206824] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:11:26.146 [2024-12-06 04:02:19.207341] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:11:26.146 BaseBdev3 00:11:26.146 [2024-12-06 04:02:19.207557] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:11:26.146 [2024-12-06 04:02:19.207569] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:11:26.146 [2024-12-06 04:02:19.207737] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:26.146 04:02:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:26.146 04:02:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:11:26.146 04:02:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:11:26.146 04:02:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:26.146 04:02:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:11:26.146 04:02:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:26.146 04:02:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:26.146 04:02:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:26.146 04:02:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:26.146 04:02:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:26.146 04:02:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:26.146 04:02:19 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:11:26.146 04:02:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:26.146 04:02:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:26.146 [ 00:11:26.146 { 00:11:26.146 "name": "BaseBdev3", 00:11:26.146 "aliases": [ 00:11:26.146 "dfb6670c-9a83-40f3-b755-95387f8b3244" 00:11:26.146 ], 00:11:26.146 "product_name": "Malloc disk", 00:11:26.146 "block_size": 512, 00:11:26.146 "num_blocks": 65536, 00:11:26.146 "uuid": "dfb6670c-9a83-40f3-b755-95387f8b3244", 00:11:26.146 "assigned_rate_limits": { 00:11:26.146 "rw_ios_per_sec": 0, 00:11:26.146 "rw_mbytes_per_sec": 0, 00:11:26.146 "r_mbytes_per_sec": 0, 00:11:26.146 "w_mbytes_per_sec": 0 00:11:26.146 }, 00:11:26.146 "claimed": true, 00:11:26.146 "claim_type": "exclusive_write", 00:11:26.146 "zoned": false, 00:11:26.146 "supported_io_types": { 00:11:26.146 "read": true, 00:11:26.146 "write": true, 00:11:26.146 "unmap": true, 00:11:26.146 "flush": true, 00:11:26.146 "reset": true, 00:11:26.146 "nvme_admin": false, 00:11:26.146 "nvme_io": false, 00:11:26.146 "nvme_io_md": false, 00:11:26.146 "write_zeroes": true, 00:11:26.146 "zcopy": true, 00:11:26.146 "get_zone_info": false, 00:11:26.146 "zone_management": false, 00:11:26.146 "zone_append": false, 00:11:26.146 "compare": false, 00:11:26.146 "compare_and_write": false, 00:11:26.146 "abort": true, 00:11:26.146 "seek_hole": false, 00:11:26.146 "seek_data": false, 00:11:26.146 "copy": true, 00:11:26.146 "nvme_iov_md": false 00:11:26.146 }, 00:11:26.146 "memory_domains": [ 00:11:26.146 { 00:11:26.146 "dma_device_id": "system", 00:11:26.146 "dma_device_type": 1 00:11:26.146 }, 00:11:26.146 { 00:11:26.146 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:26.146 "dma_device_type": 2 00:11:26.146 } 00:11:26.146 ], 00:11:26.146 "driver_specific": {} 00:11:26.146 } 00:11:26.146 ] 
00:11:26.146 04:02:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:26.146 04:02:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:11:26.146 04:02:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:11:26.146 04:02:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:26.146 04:02:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:11:26.146 04:02:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:26.146 04:02:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:26.146 04:02:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:26.146 04:02:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:26.146 04:02:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:26.146 04:02:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:26.146 04:02:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:26.146 04:02:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:26.146 04:02:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:26.146 04:02:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:26.146 04:02:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:26.146 04:02:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:26.146 
04:02:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:26.146 04:02:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:26.146 04:02:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:26.146 "name": "Existed_Raid", 00:11:26.146 "uuid": "c617756e-5d55-4e1c-93a7-f6d8d8fd4b4b", 00:11:26.146 "strip_size_kb": 0, 00:11:26.146 "state": "online", 00:11:26.146 "raid_level": "raid1", 00:11:26.146 "superblock": true, 00:11:26.146 "num_base_bdevs": 3, 00:11:26.146 "num_base_bdevs_discovered": 3, 00:11:26.146 "num_base_bdevs_operational": 3, 00:11:26.146 "base_bdevs_list": [ 00:11:26.146 { 00:11:26.146 "name": "BaseBdev1", 00:11:26.146 "uuid": "082c757a-f6d1-401d-89e8-8eb8059a8a87", 00:11:26.146 "is_configured": true, 00:11:26.146 "data_offset": 2048, 00:11:26.146 "data_size": 63488 00:11:26.146 }, 00:11:26.146 { 00:11:26.146 "name": "BaseBdev2", 00:11:26.146 "uuid": "970f8e24-8a4f-400b-a68a-fe4fd0da65e7", 00:11:26.146 "is_configured": true, 00:11:26.146 "data_offset": 2048, 00:11:26.146 "data_size": 63488 00:11:26.146 }, 00:11:26.146 { 00:11:26.146 "name": "BaseBdev3", 00:11:26.146 "uuid": "dfb6670c-9a83-40f3-b755-95387f8b3244", 00:11:26.146 "is_configured": true, 00:11:26.146 "data_offset": 2048, 00:11:26.146 "data_size": 63488 00:11:26.146 } 00:11:26.146 ] 00:11:26.146 }' 00:11:26.146 04:02:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:26.146 04:02:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:26.423 04:02:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:11:26.423 04:02:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:11:26.423 04:02:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 
00:11:26.423 04:02:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:11:26.423 04:02:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:11:26.423 04:02:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:11:26.423 04:02:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:11:26.423 04:02:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:11:26.423 04:02:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:26.423 04:02:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:26.423 [2024-12-06 04:02:19.738062] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:26.423 04:02:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:26.423 04:02:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:11:26.423 "name": "Existed_Raid", 00:11:26.423 "aliases": [ 00:11:26.423 "c617756e-5d55-4e1c-93a7-f6d8d8fd4b4b" 00:11:26.423 ], 00:11:26.423 "product_name": "Raid Volume", 00:11:26.423 "block_size": 512, 00:11:26.423 "num_blocks": 63488, 00:11:26.423 "uuid": "c617756e-5d55-4e1c-93a7-f6d8d8fd4b4b", 00:11:26.423 "assigned_rate_limits": { 00:11:26.423 "rw_ios_per_sec": 0, 00:11:26.423 "rw_mbytes_per_sec": 0, 00:11:26.423 "r_mbytes_per_sec": 0, 00:11:26.423 "w_mbytes_per_sec": 0 00:11:26.423 }, 00:11:26.423 "claimed": false, 00:11:26.423 "zoned": false, 00:11:26.423 "supported_io_types": { 00:11:26.423 "read": true, 00:11:26.423 "write": true, 00:11:26.423 "unmap": false, 00:11:26.423 "flush": false, 00:11:26.423 "reset": true, 00:11:26.423 "nvme_admin": false, 00:11:26.423 "nvme_io": false, 00:11:26.423 "nvme_io_md": false, 00:11:26.423 "write_zeroes": true, 
00:11:26.423 "zcopy": false, 00:11:26.423 "get_zone_info": false, 00:11:26.423 "zone_management": false, 00:11:26.423 "zone_append": false, 00:11:26.423 "compare": false, 00:11:26.423 "compare_and_write": false, 00:11:26.423 "abort": false, 00:11:26.423 "seek_hole": false, 00:11:26.423 "seek_data": false, 00:11:26.423 "copy": false, 00:11:26.423 "nvme_iov_md": false 00:11:26.423 }, 00:11:26.423 "memory_domains": [ 00:11:26.423 { 00:11:26.423 "dma_device_id": "system", 00:11:26.423 "dma_device_type": 1 00:11:26.423 }, 00:11:26.423 { 00:11:26.423 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:26.423 "dma_device_type": 2 00:11:26.423 }, 00:11:26.423 { 00:11:26.423 "dma_device_id": "system", 00:11:26.423 "dma_device_type": 1 00:11:26.423 }, 00:11:26.423 { 00:11:26.423 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:26.423 "dma_device_type": 2 00:11:26.423 }, 00:11:26.423 { 00:11:26.423 "dma_device_id": "system", 00:11:26.423 "dma_device_type": 1 00:11:26.423 }, 00:11:26.423 { 00:11:26.423 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:26.423 "dma_device_type": 2 00:11:26.423 } 00:11:26.423 ], 00:11:26.423 "driver_specific": { 00:11:26.423 "raid": { 00:11:26.423 "uuid": "c617756e-5d55-4e1c-93a7-f6d8d8fd4b4b", 00:11:26.423 "strip_size_kb": 0, 00:11:26.423 "state": "online", 00:11:26.423 "raid_level": "raid1", 00:11:26.423 "superblock": true, 00:11:26.423 "num_base_bdevs": 3, 00:11:26.423 "num_base_bdevs_discovered": 3, 00:11:26.423 "num_base_bdevs_operational": 3, 00:11:26.423 "base_bdevs_list": [ 00:11:26.423 { 00:11:26.423 "name": "BaseBdev1", 00:11:26.423 "uuid": "082c757a-f6d1-401d-89e8-8eb8059a8a87", 00:11:26.423 "is_configured": true, 00:11:26.423 "data_offset": 2048, 00:11:26.423 "data_size": 63488 00:11:26.423 }, 00:11:26.423 { 00:11:26.423 "name": "BaseBdev2", 00:11:26.423 "uuid": "970f8e24-8a4f-400b-a68a-fe4fd0da65e7", 00:11:26.423 "is_configured": true, 00:11:26.423 "data_offset": 2048, 00:11:26.423 "data_size": 63488 00:11:26.423 }, 00:11:26.423 { 
00:11:26.423 "name": "BaseBdev3", 00:11:26.423 "uuid": "dfb6670c-9a83-40f3-b755-95387f8b3244", 00:11:26.423 "is_configured": true, 00:11:26.423 "data_offset": 2048, 00:11:26.423 "data_size": 63488 00:11:26.423 } 00:11:26.423 ] 00:11:26.423 } 00:11:26.423 } 00:11:26.423 }' 00:11:26.423 04:02:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:26.683 04:02:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:11:26.683 BaseBdev2 00:11:26.683 BaseBdev3' 00:11:26.683 04:02:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:26.683 04:02:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:11:26.683 04:02:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:26.683 04:02:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:11:26.683 04:02:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:26.683 04:02:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:26.683 04:02:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:26.683 04:02:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:26.683 04:02:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:26.683 04:02:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:26.683 04:02:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:26.683 04:02:19 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:26.683 04:02:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:11:26.683 04:02:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:26.683 04:02:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:26.683 04:02:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:26.683 04:02:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:26.683 04:02:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:26.683 04:02:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:26.683 04:02:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:11:26.683 04:02:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:26.683 04:02:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:26.683 04:02:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:26.683 04:02:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:26.683 04:02:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:26.683 04:02:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:26.683 04:02:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:11:26.683 04:02:19 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:11:26.683 04:02:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:26.683 [2024-12-06 04:02:19.997421] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:11:26.942 04:02:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:26.942 04:02:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:11:26.942 04:02:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:11:26.942 04:02:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:11:26.942 04:02:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@199 -- # return 0 00:11:26.942 04:02:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:11:26.942 04:02:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:11:26.942 04:02:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:26.942 04:02:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:26.942 04:02:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:26.942 04:02:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:26.942 04:02:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:11:26.942 04:02:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:26.942 04:02:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:26.942 04:02:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:26.942 
04:02:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:26.942 04:02:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:26.942 04:02:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:26.942 04:02:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:26.942 04:02:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:26.942 04:02:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:26.942 04:02:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:26.942 "name": "Existed_Raid", 00:11:26.942 "uuid": "c617756e-5d55-4e1c-93a7-f6d8d8fd4b4b", 00:11:26.942 "strip_size_kb": 0, 00:11:26.942 "state": "online", 00:11:26.942 "raid_level": "raid1", 00:11:26.942 "superblock": true, 00:11:26.942 "num_base_bdevs": 3, 00:11:26.942 "num_base_bdevs_discovered": 2, 00:11:26.942 "num_base_bdevs_operational": 2, 00:11:26.942 "base_bdevs_list": [ 00:11:26.942 { 00:11:26.942 "name": null, 00:11:26.942 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:26.942 "is_configured": false, 00:11:26.942 "data_offset": 0, 00:11:26.942 "data_size": 63488 00:11:26.942 }, 00:11:26.942 { 00:11:26.942 "name": "BaseBdev2", 00:11:26.942 "uuid": "970f8e24-8a4f-400b-a68a-fe4fd0da65e7", 00:11:26.942 "is_configured": true, 00:11:26.942 "data_offset": 2048, 00:11:26.942 "data_size": 63488 00:11:26.942 }, 00:11:26.942 { 00:11:26.942 "name": "BaseBdev3", 00:11:26.942 "uuid": "dfb6670c-9a83-40f3-b755-95387f8b3244", 00:11:26.942 "is_configured": true, 00:11:26.942 "data_offset": 2048, 00:11:26.942 "data_size": 63488 00:11:26.942 } 00:11:26.942 ] 00:11:26.942 }' 00:11:26.942 04:02:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:26.942 
04:02:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:27.201 04:02:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:11:27.201 04:02:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:27.201 04:02:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:27.201 04:02:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:27.201 04:02:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:27.201 04:02:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:11:27.201 04:02:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:27.460 04:02:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:11:27.460 04:02:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:11:27.460 04:02:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:11:27.460 04:02:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:27.460 04:02:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:27.460 [2024-12-06 04:02:20.576729] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:11:27.460 04:02:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:27.460 04:02:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:11:27.460 04:02:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:27.460 04:02:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 
00:11:27.460 04:02:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:27.460 04:02:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:27.460 04:02:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:27.460 04:02:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:27.460 04:02:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:11:27.460 04:02:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:11:27.460 04:02:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:11:27.460 04:02:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:27.460 04:02:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:27.460 [2024-12-06 04:02:20.744083] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:11:27.460 [2024-12-06 04:02:20.744277] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:27.720 [2024-12-06 04:02:20.852943] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:27.720 [2024-12-06 04:02:20.853145] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:27.720 [2024-12-06 04:02:20.853209] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:11:27.720 04:02:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:27.720 04:02:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:11:27.720 04:02:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 
-- # (( i < num_base_bdevs )) 00:11:27.720 04:02:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:11:27.720 04:02:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:27.720 04:02:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:27.720 04:02:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:27.720 04:02:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:27.720 04:02:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:11:27.720 04:02:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:11:27.720 04:02:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:11:27.720 04:02:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:11:27.720 04:02:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:11:27.720 04:02:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:11:27.720 04:02:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:27.720 04:02:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:27.720 BaseBdev2 00:11:27.720 04:02:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:27.720 04:02:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:11:27.720 04:02:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:11:27.720 04:02:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:27.720 04:02:20 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:11:27.720 04:02:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:27.720 04:02:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:27.721 04:02:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:27.721 04:02:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:27.721 04:02:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:27.721 04:02:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:27.721 04:02:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:11:27.721 04:02:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:27.721 04:02:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:27.721 [ 00:11:27.721 { 00:11:27.721 "name": "BaseBdev2", 00:11:27.721 "aliases": [ 00:11:27.721 "02c2727b-e3cc-406f-bbf7-09f4a94f263a" 00:11:27.721 ], 00:11:27.721 "product_name": "Malloc disk", 00:11:27.721 "block_size": 512, 00:11:27.721 "num_blocks": 65536, 00:11:27.721 "uuid": "02c2727b-e3cc-406f-bbf7-09f4a94f263a", 00:11:27.721 "assigned_rate_limits": { 00:11:27.721 "rw_ios_per_sec": 0, 00:11:27.721 "rw_mbytes_per_sec": 0, 00:11:27.721 "r_mbytes_per_sec": 0, 00:11:27.721 "w_mbytes_per_sec": 0 00:11:27.721 }, 00:11:27.721 "claimed": false, 00:11:27.721 "zoned": false, 00:11:27.721 "supported_io_types": { 00:11:27.721 "read": true, 00:11:27.721 "write": true, 00:11:27.721 "unmap": true, 00:11:27.721 "flush": true, 00:11:27.721 "reset": true, 00:11:27.721 "nvme_admin": false, 00:11:27.721 "nvme_io": false, 00:11:27.721 "nvme_io_md": false, 00:11:27.721 
"write_zeroes": true, 00:11:27.721 "zcopy": true, 00:11:27.721 "get_zone_info": false, 00:11:27.721 "zone_management": false, 00:11:27.721 "zone_append": false, 00:11:27.721 "compare": false, 00:11:27.721 "compare_and_write": false, 00:11:27.721 "abort": true, 00:11:27.721 "seek_hole": false, 00:11:27.721 "seek_data": false, 00:11:27.721 "copy": true, 00:11:27.721 "nvme_iov_md": false 00:11:27.721 }, 00:11:27.721 "memory_domains": [ 00:11:27.721 { 00:11:27.721 "dma_device_id": "system", 00:11:27.721 "dma_device_type": 1 00:11:27.721 }, 00:11:27.721 { 00:11:27.721 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:27.721 "dma_device_type": 2 00:11:27.721 } 00:11:27.721 ], 00:11:27.721 "driver_specific": {} 00:11:27.721 } 00:11:27.721 ] 00:11:27.721 04:02:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:27.721 04:02:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:11:27.721 04:02:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:11:27.721 04:02:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:11:27.721 04:02:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:11:27.721 04:02:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:27.721 04:02:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:27.721 BaseBdev3 00:11:27.721 04:02:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:27.721 04:02:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:11:27.721 04:02:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:11:27.721 04:02:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # 
local bdev_timeout= 00:11:27.721 04:02:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:11:27.721 04:02:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:27.721 04:02:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:27.721 04:02:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:27.721 04:02:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:27.721 04:02:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:27.721 04:02:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:27.721 04:02:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:11:27.721 04:02:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:27.721 04:02:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:27.721 [ 00:11:27.721 { 00:11:27.721 "name": "BaseBdev3", 00:11:27.721 "aliases": [ 00:11:27.721 "8b66c66a-bcac-4e30-9d37-17e1193ae3d9" 00:11:27.721 ], 00:11:27.721 "product_name": "Malloc disk", 00:11:27.721 "block_size": 512, 00:11:27.721 "num_blocks": 65536, 00:11:27.721 "uuid": "8b66c66a-bcac-4e30-9d37-17e1193ae3d9", 00:11:27.721 "assigned_rate_limits": { 00:11:27.721 "rw_ios_per_sec": 0, 00:11:27.721 "rw_mbytes_per_sec": 0, 00:11:27.721 "r_mbytes_per_sec": 0, 00:11:27.721 "w_mbytes_per_sec": 0 00:11:27.721 }, 00:11:27.721 "claimed": false, 00:11:27.721 "zoned": false, 00:11:27.721 "supported_io_types": { 00:11:27.721 "read": true, 00:11:27.721 "write": true, 00:11:27.721 "unmap": true, 00:11:27.721 "flush": true, 00:11:27.721 "reset": true, 00:11:27.721 "nvme_admin": false, 00:11:27.721 "nvme_io": false, 
00:11:27.721 "nvme_io_md": false, 00:11:27.721 "write_zeroes": true, 00:11:27.721 "zcopy": true, 00:11:27.980 "get_zone_info": false, 00:11:27.980 "zone_management": false, 00:11:27.980 "zone_append": false, 00:11:27.980 "compare": false, 00:11:27.981 "compare_and_write": false, 00:11:27.981 "abort": true, 00:11:27.981 "seek_hole": false, 00:11:27.981 "seek_data": false, 00:11:27.981 "copy": true, 00:11:27.981 "nvme_iov_md": false 00:11:27.981 }, 00:11:27.981 "memory_domains": [ 00:11:27.981 { 00:11:27.981 "dma_device_id": "system", 00:11:27.981 "dma_device_type": 1 00:11:27.981 }, 00:11:27.981 { 00:11:27.981 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:27.981 "dma_device_type": 2 00:11:27.981 } 00:11:27.981 ], 00:11:27.981 "driver_specific": {} 00:11:27.981 } 00:11:27.981 ] 00:11:27.981 04:02:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:27.981 04:02:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:11:27.981 04:02:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:11:27.981 04:02:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:11:27.981 04:02:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:11:27.981 04:02:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:27.981 04:02:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:27.981 [2024-12-06 04:02:21.087868] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:27.981 [2024-12-06 04:02:21.087978] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:27.981 [2024-12-06 04:02:21.088025] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev 
BaseBdev2 is claimed 00:11:27.981 [2024-12-06 04:02:21.090100] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:27.981 04:02:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:27.981 04:02:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:11:27.981 04:02:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:27.981 04:02:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:27.981 04:02:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:27.981 04:02:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:27.981 04:02:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:27.981 04:02:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:27.981 04:02:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:27.981 04:02:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:27.981 04:02:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:27.981 04:02:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:27.981 04:02:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:27.981 04:02:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:27.981 04:02:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:27.981 04:02:21 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:27.981 04:02:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:27.981 "name": "Existed_Raid", 00:11:27.981 "uuid": "0fc9d19d-5989-4c49-8000-b21fd49f7c0c", 00:11:27.981 "strip_size_kb": 0, 00:11:27.981 "state": "configuring", 00:11:27.981 "raid_level": "raid1", 00:11:27.981 "superblock": true, 00:11:27.981 "num_base_bdevs": 3, 00:11:27.981 "num_base_bdevs_discovered": 2, 00:11:27.981 "num_base_bdevs_operational": 3, 00:11:27.981 "base_bdevs_list": [ 00:11:27.981 { 00:11:27.981 "name": "BaseBdev1", 00:11:27.981 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:27.981 "is_configured": false, 00:11:27.981 "data_offset": 0, 00:11:27.981 "data_size": 0 00:11:27.981 }, 00:11:27.981 { 00:11:27.981 "name": "BaseBdev2", 00:11:27.981 "uuid": "02c2727b-e3cc-406f-bbf7-09f4a94f263a", 00:11:27.981 "is_configured": true, 00:11:27.981 "data_offset": 2048, 00:11:27.981 "data_size": 63488 00:11:27.981 }, 00:11:27.981 { 00:11:27.981 "name": "BaseBdev3", 00:11:27.981 "uuid": "8b66c66a-bcac-4e30-9d37-17e1193ae3d9", 00:11:27.981 "is_configured": true, 00:11:27.981 "data_offset": 2048, 00:11:27.981 "data_size": 63488 00:11:27.981 } 00:11:27.981 ] 00:11:27.981 }' 00:11:27.981 04:02:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:27.981 04:02:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:28.241 04:02:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:11:28.241 04:02:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:28.241 04:02:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:28.241 [2024-12-06 04:02:21.491219] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:11:28.241 04:02:21 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:28.241 04:02:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:11:28.241 04:02:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:28.241 04:02:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:28.241 04:02:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:28.241 04:02:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:28.241 04:02:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:28.241 04:02:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:28.241 04:02:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:28.241 04:02:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:28.241 04:02:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:28.241 04:02:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:28.241 04:02:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:28.241 04:02:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:28.241 04:02:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:28.241 04:02:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:28.241 04:02:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:28.241 "name": "Existed_Raid", 00:11:28.241 "uuid": 
"0fc9d19d-5989-4c49-8000-b21fd49f7c0c", 00:11:28.241 "strip_size_kb": 0, 00:11:28.241 "state": "configuring", 00:11:28.241 "raid_level": "raid1", 00:11:28.241 "superblock": true, 00:11:28.241 "num_base_bdevs": 3, 00:11:28.241 "num_base_bdevs_discovered": 1, 00:11:28.241 "num_base_bdevs_operational": 3, 00:11:28.241 "base_bdevs_list": [ 00:11:28.241 { 00:11:28.241 "name": "BaseBdev1", 00:11:28.241 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:28.241 "is_configured": false, 00:11:28.241 "data_offset": 0, 00:11:28.241 "data_size": 0 00:11:28.241 }, 00:11:28.241 { 00:11:28.241 "name": null, 00:11:28.241 "uuid": "02c2727b-e3cc-406f-bbf7-09f4a94f263a", 00:11:28.241 "is_configured": false, 00:11:28.241 "data_offset": 0, 00:11:28.241 "data_size": 63488 00:11:28.241 }, 00:11:28.241 { 00:11:28.241 "name": "BaseBdev3", 00:11:28.241 "uuid": "8b66c66a-bcac-4e30-9d37-17e1193ae3d9", 00:11:28.241 "is_configured": true, 00:11:28.241 "data_offset": 2048, 00:11:28.241 "data_size": 63488 00:11:28.241 } 00:11:28.241 ] 00:11:28.241 }' 00:11:28.241 04:02:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:28.241 04:02:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:28.811 04:02:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:28.811 04:02:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:28.811 04:02:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:28.811 04:02:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:11:28.811 04:02:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:28.811 04:02:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:11:28.811 04:02:21 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:11:28.811 04:02:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:28.811 04:02:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:28.811 [2024-12-06 04:02:22.008101] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:28.811 BaseBdev1 00:11:28.811 04:02:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:28.811 04:02:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:11:28.811 04:02:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:11:28.811 04:02:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:28.811 04:02:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:11:28.811 04:02:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:28.811 04:02:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:28.811 04:02:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:28.811 04:02:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:28.811 04:02:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:28.811 04:02:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:28.811 04:02:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:11:28.811 04:02:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 
00:11:28.811 04:02:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:28.811 [ 00:11:28.811 { 00:11:28.811 "name": "BaseBdev1", 00:11:28.811 "aliases": [ 00:11:28.811 "91a3cae8-9738-4645-8ce9-75a919b67fec" 00:11:28.811 ], 00:11:28.811 "product_name": "Malloc disk", 00:11:28.811 "block_size": 512, 00:11:28.811 "num_blocks": 65536, 00:11:28.811 "uuid": "91a3cae8-9738-4645-8ce9-75a919b67fec", 00:11:28.811 "assigned_rate_limits": { 00:11:28.811 "rw_ios_per_sec": 0, 00:11:28.811 "rw_mbytes_per_sec": 0, 00:11:28.811 "r_mbytes_per_sec": 0, 00:11:28.811 "w_mbytes_per_sec": 0 00:11:28.811 }, 00:11:28.811 "claimed": true, 00:11:28.811 "claim_type": "exclusive_write", 00:11:28.811 "zoned": false, 00:11:28.811 "supported_io_types": { 00:11:28.811 "read": true, 00:11:28.811 "write": true, 00:11:28.811 "unmap": true, 00:11:28.811 "flush": true, 00:11:28.811 "reset": true, 00:11:28.811 "nvme_admin": false, 00:11:28.811 "nvme_io": false, 00:11:28.811 "nvme_io_md": false, 00:11:28.811 "write_zeroes": true, 00:11:28.811 "zcopy": true, 00:11:28.811 "get_zone_info": false, 00:11:28.811 "zone_management": false, 00:11:28.811 "zone_append": false, 00:11:28.811 "compare": false, 00:11:28.811 "compare_and_write": false, 00:11:28.811 "abort": true, 00:11:28.811 "seek_hole": false, 00:11:28.811 "seek_data": false, 00:11:28.811 "copy": true, 00:11:28.811 "nvme_iov_md": false 00:11:28.811 }, 00:11:28.811 "memory_domains": [ 00:11:28.811 { 00:11:28.811 "dma_device_id": "system", 00:11:28.811 "dma_device_type": 1 00:11:28.811 }, 00:11:28.811 { 00:11:28.811 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:28.811 "dma_device_type": 2 00:11:28.811 } 00:11:28.811 ], 00:11:28.811 "driver_specific": {} 00:11:28.811 } 00:11:28.811 ] 00:11:28.811 04:02:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:28.811 04:02:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:11:28.811 
04:02:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:11:28.811 04:02:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:28.811 04:02:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:28.811 04:02:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:28.811 04:02:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:28.811 04:02:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:28.811 04:02:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:28.811 04:02:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:28.811 04:02:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:28.811 04:02:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:28.811 04:02:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:28.811 04:02:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:28.811 04:02:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:28.811 04:02:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:28.811 04:02:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:28.811 04:02:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:28.811 "name": "Existed_Raid", 00:11:28.811 "uuid": "0fc9d19d-5989-4c49-8000-b21fd49f7c0c", 00:11:28.811 "strip_size_kb": 0, 
00:11:28.811 "state": "configuring", 00:11:28.811 "raid_level": "raid1", 00:11:28.811 "superblock": true, 00:11:28.811 "num_base_bdevs": 3, 00:11:28.811 "num_base_bdevs_discovered": 2, 00:11:28.811 "num_base_bdevs_operational": 3, 00:11:28.811 "base_bdevs_list": [ 00:11:28.811 { 00:11:28.811 "name": "BaseBdev1", 00:11:28.811 "uuid": "91a3cae8-9738-4645-8ce9-75a919b67fec", 00:11:28.811 "is_configured": true, 00:11:28.811 "data_offset": 2048, 00:11:28.811 "data_size": 63488 00:11:28.811 }, 00:11:28.811 { 00:11:28.811 "name": null, 00:11:28.811 "uuid": "02c2727b-e3cc-406f-bbf7-09f4a94f263a", 00:11:28.811 "is_configured": false, 00:11:28.811 "data_offset": 0, 00:11:28.811 "data_size": 63488 00:11:28.811 }, 00:11:28.811 { 00:11:28.811 "name": "BaseBdev3", 00:11:28.811 "uuid": "8b66c66a-bcac-4e30-9d37-17e1193ae3d9", 00:11:28.811 "is_configured": true, 00:11:28.811 "data_offset": 2048, 00:11:28.811 "data_size": 63488 00:11:28.811 } 00:11:28.811 ] 00:11:28.811 }' 00:11:28.811 04:02:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:28.811 04:02:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:29.382 04:02:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:29.382 04:02:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:11:29.382 04:02:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:29.382 04:02:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:29.382 04:02:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:29.382 04:02:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:11:29.382 04:02:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev 
BaseBdev3 00:11:29.382 04:02:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:29.382 04:02:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:29.382 [2024-12-06 04:02:22.487293] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:11:29.382 04:02:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:29.382 04:02:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:11:29.382 04:02:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:29.382 04:02:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:29.382 04:02:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:29.382 04:02:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:29.382 04:02:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:29.382 04:02:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:29.382 04:02:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:29.382 04:02:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:29.382 04:02:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:29.382 04:02:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:29.382 04:02:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:29.382 04:02:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:11:29.382 04:02:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:29.382 04:02:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:29.382 04:02:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:29.382 "name": "Existed_Raid", 00:11:29.382 "uuid": "0fc9d19d-5989-4c49-8000-b21fd49f7c0c", 00:11:29.382 "strip_size_kb": 0, 00:11:29.382 "state": "configuring", 00:11:29.382 "raid_level": "raid1", 00:11:29.382 "superblock": true, 00:11:29.382 "num_base_bdevs": 3, 00:11:29.382 "num_base_bdevs_discovered": 1, 00:11:29.382 "num_base_bdevs_operational": 3, 00:11:29.382 "base_bdevs_list": [ 00:11:29.382 { 00:11:29.382 "name": "BaseBdev1", 00:11:29.382 "uuid": "91a3cae8-9738-4645-8ce9-75a919b67fec", 00:11:29.382 "is_configured": true, 00:11:29.382 "data_offset": 2048, 00:11:29.382 "data_size": 63488 00:11:29.382 }, 00:11:29.382 { 00:11:29.382 "name": null, 00:11:29.382 "uuid": "02c2727b-e3cc-406f-bbf7-09f4a94f263a", 00:11:29.382 "is_configured": false, 00:11:29.382 "data_offset": 0, 00:11:29.382 "data_size": 63488 00:11:29.382 }, 00:11:29.382 { 00:11:29.382 "name": null, 00:11:29.382 "uuid": "8b66c66a-bcac-4e30-9d37-17e1193ae3d9", 00:11:29.382 "is_configured": false, 00:11:29.382 "data_offset": 0, 00:11:29.382 "data_size": 63488 00:11:29.382 } 00:11:29.382 ] 00:11:29.382 }' 00:11:29.382 04:02:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:29.382 04:02:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:29.643 04:02:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:29.643 04:02:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:29.643 04:02:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:29.643 04:02:22 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:11:29.643 04:02:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:29.643 04:02:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:11:29.643 04:02:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:11:29.643 04:02:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:29.643 04:02:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:29.643 [2024-12-06 04:02:22.974484] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:29.643 04:02:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:29.643 04:02:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:11:29.643 04:02:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:29.643 04:02:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:29.643 04:02:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:29.643 04:02:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:29.643 04:02:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:29.643 04:02:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:29.643 04:02:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:29.643 04:02:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:11:29.643 04:02:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:29.643 04:02:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:29.643 04:02:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:29.643 04:02:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:29.643 04:02:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:29.903 04:02:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:29.903 04:02:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:29.903 "name": "Existed_Raid", 00:11:29.903 "uuid": "0fc9d19d-5989-4c49-8000-b21fd49f7c0c", 00:11:29.903 "strip_size_kb": 0, 00:11:29.903 "state": "configuring", 00:11:29.903 "raid_level": "raid1", 00:11:29.903 "superblock": true, 00:11:29.903 "num_base_bdevs": 3, 00:11:29.903 "num_base_bdevs_discovered": 2, 00:11:29.903 "num_base_bdevs_operational": 3, 00:11:29.903 "base_bdevs_list": [ 00:11:29.903 { 00:11:29.903 "name": "BaseBdev1", 00:11:29.903 "uuid": "91a3cae8-9738-4645-8ce9-75a919b67fec", 00:11:29.903 "is_configured": true, 00:11:29.903 "data_offset": 2048, 00:11:29.903 "data_size": 63488 00:11:29.903 }, 00:11:29.903 { 00:11:29.903 "name": null, 00:11:29.903 "uuid": "02c2727b-e3cc-406f-bbf7-09f4a94f263a", 00:11:29.903 "is_configured": false, 00:11:29.903 "data_offset": 0, 00:11:29.903 "data_size": 63488 00:11:29.903 }, 00:11:29.903 { 00:11:29.903 "name": "BaseBdev3", 00:11:29.903 "uuid": "8b66c66a-bcac-4e30-9d37-17e1193ae3d9", 00:11:29.903 "is_configured": true, 00:11:29.903 "data_offset": 2048, 00:11:29.903 "data_size": 63488 00:11:29.903 } 00:11:29.903 ] 00:11:29.903 }' 00:11:29.903 04:02:23 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:29.903 04:02:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:30.163 04:02:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:11:30.163 04:02:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:30.163 04:02:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:30.163 04:02:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:30.163 04:02:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:30.163 04:02:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:11:30.163 04:02:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:11:30.163 04:02:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:30.163 04:02:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:30.163 [2024-12-06 04:02:23.449682] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:11:30.422 04:02:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:30.423 04:02:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:11:30.423 04:02:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:30.423 04:02:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:30.423 04:02:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:30.423 04:02:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- 
# local strip_size=0 00:11:30.423 04:02:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:30.423 04:02:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:30.423 04:02:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:30.423 04:02:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:30.423 04:02:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:30.423 04:02:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:30.423 04:02:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:30.423 04:02:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:30.423 04:02:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:30.423 04:02:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:30.423 04:02:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:30.423 "name": "Existed_Raid", 00:11:30.423 "uuid": "0fc9d19d-5989-4c49-8000-b21fd49f7c0c", 00:11:30.423 "strip_size_kb": 0, 00:11:30.423 "state": "configuring", 00:11:30.423 "raid_level": "raid1", 00:11:30.423 "superblock": true, 00:11:30.423 "num_base_bdevs": 3, 00:11:30.423 "num_base_bdevs_discovered": 1, 00:11:30.423 "num_base_bdevs_operational": 3, 00:11:30.423 "base_bdevs_list": [ 00:11:30.423 { 00:11:30.423 "name": null, 00:11:30.423 "uuid": "91a3cae8-9738-4645-8ce9-75a919b67fec", 00:11:30.423 "is_configured": false, 00:11:30.423 "data_offset": 0, 00:11:30.423 "data_size": 63488 00:11:30.423 }, 00:11:30.423 { 00:11:30.423 "name": null, 00:11:30.423 "uuid": 
"02c2727b-e3cc-406f-bbf7-09f4a94f263a", 00:11:30.423 "is_configured": false, 00:11:30.423 "data_offset": 0, 00:11:30.423 "data_size": 63488 00:11:30.423 }, 00:11:30.423 { 00:11:30.423 "name": "BaseBdev3", 00:11:30.423 "uuid": "8b66c66a-bcac-4e30-9d37-17e1193ae3d9", 00:11:30.423 "is_configured": true, 00:11:30.423 "data_offset": 2048, 00:11:30.423 "data_size": 63488 00:11:30.423 } 00:11:30.423 ] 00:11:30.423 }' 00:11:30.423 04:02:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:30.423 04:02:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:30.683 04:02:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:30.683 04:02:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:30.683 04:02:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:30.683 04:02:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:11:30.683 04:02:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:30.943 04:02:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:11:30.943 04:02:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:11:30.943 04:02:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:30.943 04:02:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:30.943 [2024-12-06 04:02:24.056076] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:30.943 04:02:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:30.943 04:02:24 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:11:30.943 04:02:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:30.943 04:02:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:30.943 04:02:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:30.943 04:02:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:30.943 04:02:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:30.943 04:02:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:30.943 04:02:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:30.943 04:02:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:30.943 04:02:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:30.943 04:02:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:30.943 04:02:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:30.943 04:02:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:30.943 04:02:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:30.943 04:02:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:30.943 04:02:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:30.943 "name": "Existed_Raid", 00:11:30.943 "uuid": "0fc9d19d-5989-4c49-8000-b21fd49f7c0c", 00:11:30.943 "strip_size_kb": 0, 00:11:30.943 "state": "configuring", 00:11:30.943 
"raid_level": "raid1", 00:11:30.943 "superblock": true, 00:11:30.943 "num_base_bdevs": 3, 00:11:30.943 "num_base_bdevs_discovered": 2, 00:11:30.943 "num_base_bdevs_operational": 3, 00:11:30.943 "base_bdevs_list": [ 00:11:30.943 { 00:11:30.943 "name": null, 00:11:30.943 "uuid": "91a3cae8-9738-4645-8ce9-75a919b67fec", 00:11:30.943 "is_configured": false, 00:11:30.943 "data_offset": 0, 00:11:30.943 "data_size": 63488 00:11:30.943 }, 00:11:30.943 { 00:11:30.943 "name": "BaseBdev2", 00:11:30.943 "uuid": "02c2727b-e3cc-406f-bbf7-09f4a94f263a", 00:11:30.943 "is_configured": true, 00:11:30.943 "data_offset": 2048, 00:11:30.943 "data_size": 63488 00:11:30.943 }, 00:11:30.943 { 00:11:30.943 "name": "BaseBdev3", 00:11:30.943 "uuid": "8b66c66a-bcac-4e30-9d37-17e1193ae3d9", 00:11:30.943 "is_configured": true, 00:11:30.943 "data_offset": 2048, 00:11:30.943 "data_size": 63488 00:11:30.943 } 00:11:30.943 ] 00:11:30.943 }' 00:11:30.943 04:02:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:30.943 04:02:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:31.202 04:02:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:31.202 04:02:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:31.202 04:02:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:31.202 04:02:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:11:31.202 04:02:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:31.202 04:02:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:11:31.202 04:02:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:11:31.202 04:02:24 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:31.202 04:02:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:31.202 04:02:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:31.202 04:02:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:31.202 04:02:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 91a3cae8-9738-4645-8ce9-75a919b67fec 00:11:31.202 04:02:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:31.202 04:02:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:31.462 [2024-12-06 04:02:24.595959] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:11:31.462 [2024-12-06 04:02:24.596356] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:11:31.462 [2024-12-06 04:02:24.596419] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:11:31.462 [2024-12-06 04:02:24.596740] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:11:31.462 [2024-12-06 04:02:24.596949] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:11:31.462 NewBaseBdev 00:11:31.462 [2024-12-06 04:02:24.597006] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:11:31.462 [2024-12-06 04:02:24.597231] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:31.462 04:02:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:31.462 04:02:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:11:31.462 
04:02:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:11:31.462 04:02:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:31.462 04:02:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:11:31.462 04:02:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:31.462 04:02:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:31.462 04:02:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:31.462 04:02:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:31.462 04:02:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:31.462 04:02:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:31.462 04:02:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:11:31.462 04:02:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:31.462 04:02:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:31.462 [ 00:11:31.462 { 00:11:31.462 "name": "NewBaseBdev", 00:11:31.462 "aliases": [ 00:11:31.462 "91a3cae8-9738-4645-8ce9-75a919b67fec" 00:11:31.462 ], 00:11:31.462 "product_name": "Malloc disk", 00:11:31.462 "block_size": 512, 00:11:31.462 "num_blocks": 65536, 00:11:31.462 "uuid": "91a3cae8-9738-4645-8ce9-75a919b67fec", 00:11:31.462 "assigned_rate_limits": { 00:11:31.462 "rw_ios_per_sec": 0, 00:11:31.462 "rw_mbytes_per_sec": 0, 00:11:31.462 "r_mbytes_per_sec": 0, 00:11:31.462 "w_mbytes_per_sec": 0 00:11:31.462 }, 00:11:31.462 "claimed": true, 00:11:31.462 "claim_type": "exclusive_write", 00:11:31.462 
"zoned": false, 00:11:31.462 "supported_io_types": { 00:11:31.462 "read": true, 00:11:31.462 "write": true, 00:11:31.462 "unmap": true, 00:11:31.462 "flush": true, 00:11:31.462 "reset": true, 00:11:31.462 "nvme_admin": false, 00:11:31.462 "nvme_io": false, 00:11:31.462 "nvme_io_md": false, 00:11:31.462 "write_zeroes": true, 00:11:31.462 "zcopy": true, 00:11:31.462 "get_zone_info": false, 00:11:31.462 "zone_management": false, 00:11:31.462 "zone_append": false, 00:11:31.462 "compare": false, 00:11:31.462 "compare_and_write": false, 00:11:31.462 "abort": true, 00:11:31.462 "seek_hole": false, 00:11:31.462 "seek_data": false, 00:11:31.462 "copy": true, 00:11:31.462 "nvme_iov_md": false 00:11:31.462 }, 00:11:31.462 "memory_domains": [ 00:11:31.462 { 00:11:31.462 "dma_device_id": "system", 00:11:31.462 "dma_device_type": 1 00:11:31.462 }, 00:11:31.462 { 00:11:31.462 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:31.462 "dma_device_type": 2 00:11:31.462 } 00:11:31.462 ], 00:11:31.462 "driver_specific": {} 00:11:31.462 } 00:11:31.462 ] 00:11:31.462 04:02:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:31.462 04:02:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:11:31.462 04:02:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:11:31.462 04:02:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:31.462 04:02:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:31.462 04:02:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:31.462 04:02:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:31.462 04:02:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 
00:11:31.462 04:02:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:31.462 04:02:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:31.462 04:02:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:31.462 04:02:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:31.462 04:02:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:31.462 04:02:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:31.462 04:02:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:31.462 04:02:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:31.462 04:02:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:31.462 04:02:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:31.462 "name": "Existed_Raid", 00:11:31.462 "uuid": "0fc9d19d-5989-4c49-8000-b21fd49f7c0c", 00:11:31.462 "strip_size_kb": 0, 00:11:31.462 "state": "online", 00:11:31.462 "raid_level": "raid1", 00:11:31.462 "superblock": true, 00:11:31.462 "num_base_bdevs": 3, 00:11:31.462 "num_base_bdevs_discovered": 3, 00:11:31.462 "num_base_bdevs_operational": 3, 00:11:31.462 "base_bdevs_list": [ 00:11:31.462 { 00:11:31.462 "name": "NewBaseBdev", 00:11:31.462 "uuid": "91a3cae8-9738-4645-8ce9-75a919b67fec", 00:11:31.462 "is_configured": true, 00:11:31.462 "data_offset": 2048, 00:11:31.462 "data_size": 63488 00:11:31.462 }, 00:11:31.462 { 00:11:31.462 "name": "BaseBdev2", 00:11:31.462 "uuid": "02c2727b-e3cc-406f-bbf7-09f4a94f263a", 00:11:31.462 "is_configured": true, 00:11:31.462 "data_offset": 2048, 00:11:31.462 "data_size": 63488 00:11:31.462 }, 00:11:31.462 
{ 00:11:31.462 "name": "BaseBdev3", 00:11:31.462 "uuid": "8b66c66a-bcac-4e30-9d37-17e1193ae3d9", 00:11:31.462 "is_configured": true, 00:11:31.462 "data_offset": 2048, 00:11:31.462 "data_size": 63488 00:11:31.462 } 00:11:31.462 ] 00:11:31.462 }' 00:11:31.462 04:02:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:31.462 04:02:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:32.030 04:02:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:11:32.030 04:02:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:11:32.030 04:02:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:11:32.030 04:02:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:11:32.030 04:02:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:11:32.030 04:02:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:11:32.030 04:02:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:11:32.030 04:02:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:32.030 04:02:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:32.030 04:02:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:11:32.030 [2024-12-06 04:02:25.091518] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:32.030 04:02:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:32.030 04:02:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:11:32.030 "name": "Existed_Raid", 00:11:32.030 
"aliases": [ 00:11:32.030 "0fc9d19d-5989-4c49-8000-b21fd49f7c0c" 00:11:32.030 ], 00:11:32.030 "product_name": "Raid Volume", 00:11:32.030 "block_size": 512, 00:11:32.030 "num_blocks": 63488, 00:11:32.030 "uuid": "0fc9d19d-5989-4c49-8000-b21fd49f7c0c", 00:11:32.030 "assigned_rate_limits": { 00:11:32.030 "rw_ios_per_sec": 0, 00:11:32.030 "rw_mbytes_per_sec": 0, 00:11:32.030 "r_mbytes_per_sec": 0, 00:11:32.030 "w_mbytes_per_sec": 0 00:11:32.030 }, 00:11:32.030 "claimed": false, 00:11:32.030 "zoned": false, 00:11:32.030 "supported_io_types": { 00:11:32.030 "read": true, 00:11:32.030 "write": true, 00:11:32.030 "unmap": false, 00:11:32.030 "flush": false, 00:11:32.030 "reset": true, 00:11:32.030 "nvme_admin": false, 00:11:32.030 "nvme_io": false, 00:11:32.030 "nvme_io_md": false, 00:11:32.030 "write_zeroes": true, 00:11:32.030 "zcopy": false, 00:11:32.030 "get_zone_info": false, 00:11:32.030 "zone_management": false, 00:11:32.030 "zone_append": false, 00:11:32.030 "compare": false, 00:11:32.030 "compare_and_write": false, 00:11:32.030 "abort": false, 00:11:32.030 "seek_hole": false, 00:11:32.030 "seek_data": false, 00:11:32.030 "copy": false, 00:11:32.030 "nvme_iov_md": false 00:11:32.030 }, 00:11:32.030 "memory_domains": [ 00:11:32.030 { 00:11:32.030 "dma_device_id": "system", 00:11:32.030 "dma_device_type": 1 00:11:32.030 }, 00:11:32.030 { 00:11:32.030 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:32.030 "dma_device_type": 2 00:11:32.030 }, 00:11:32.030 { 00:11:32.030 "dma_device_id": "system", 00:11:32.030 "dma_device_type": 1 00:11:32.030 }, 00:11:32.030 { 00:11:32.030 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:32.030 "dma_device_type": 2 00:11:32.030 }, 00:11:32.030 { 00:11:32.030 "dma_device_id": "system", 00:11:32.030 "dma_device_type": 1 00:11:32.030 }, 00:11:32.030 { 00:11:32.030 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:32.030 "dma_device_type": 2 00:11:32.030 } 00:11:32.030 ], 00:11:32.030 "driver_specific": { 00:11:32.030 "raid": { 00:11:32.030 
"uuid": "0fc9d19d-5989-4c49-8000-b21fd49f7c0c", 00:11:32.030 "strip_size_kb": 0, 00:11:32.030 "state": "online", 00:11:32.030 "raid_level": "raid1", 00:11:32.030 "superblock": true, 00:11:32.030 "num_base_bdevs": 3, 00:11:32.030 "num_base_bdevs_discovered": 3, 00:11:32.030 "num_base_bdevs_operational": 3, 00:11:32.030 "base_bdevs_list": [ 00:11:32.030 { 00:11:32.030 "name": "NewBaseBdev", 00:11:32.030 "uuid": "91a3cae8-9738-4645-8ce9-75a919b67fec", 00:11:32.030 "is_configured": true, 00:11:32.030 "data_offset": 2048, 00:11:32.030 "data_size": 63488 00:11:32.030 }, 00:11:32.030 { 00:11:32.030 "name": "BaseBdev2", 00:11:32.030 "uuid": "02c2727b-e3cc-406f-bbf7-09f4a94f263a", 00:11:32.030 "is_configured": true, 00:11:32.030 "data_offset": 2048, 00:11:32.030 "data_size": 63488 00:11:32.030 }, 00:11:32.030 { 00:11:32.030 "name": "BaseBdev3", 00:11:32.030 "uuid": "8b66c66a-bcac-4e30-9d37-17e1193ae3d9", 00:11:32.030 "is_configured": true, 00:11:32.030 "data_offset": 2048, 00:11:32.030 "data_size": 63488 00:11:32.030 } 00:11:32.030 ] 00:11:32.030 } 00:11:32.030 } 00:11:32.030 }' 00:11:32.030 04:02:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:32.030 04:02:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:11:32.030 BaseBdev2 00:11:32.030 BaseBdev3' 00:11:32.030 04:02:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:32.030 04:02:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:11:32.030 04:02:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:32.030 04:02:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:11:32.030 04:02:25 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:32.030 04:02:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:32.030 04:02:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:32.030 04:02:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:32.030 04:02:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:32.030 04:02:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:32.030 04:02:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:32.030 04:02:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:11:32.030 04:02:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:32.030 04:02:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:32.031 04:02:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:32.031 04:02:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:32.031 04:02:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:32.031 04:02:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:32.031 04:02:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:32.031 04:02:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:11:32.031 04:02:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # 
jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:32.031 04:02:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:32.031 04:02:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:32.031 04:02:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:32.031 04:02:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:32.031 04:02:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:32.031 04:02:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:11:32.031 04:02:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:32.031 04:02:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:32.031 [2024-12-06 04:02:25.350729] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:32.031 [2024-12-06 04:02:25.350806] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:32.031 [2024-12-06 04:02:25.350901] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:32.031 [2024-12-06 04:02:25.351255] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:32.031 [2024-12-06 04:02:25.351318] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:11:32.031 04:02:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:32.031 04:02:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 68113 00:11:32.031 04:02:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 68113 ']' 
00:11:32.031 04:02:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 68113 00:11:32.031 04:02:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:11:32.031 04:02:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:32.031 04:02:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 68113 00:11:32.291 04:02:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:32.291 04:02:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:32.291 04:02:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 68113' 00:11:32.291 killing process with pid 68113 00:11:32.291 04:02:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 68113 00:11:32.291 [2024-12-06 04:02:25.386213] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:32.291 04:02:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 68113 00:11:32.550 [2024-12-06 04:02:25.681447] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:33.487 04:02:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:11:33.487 00:11:33.487 real 0m10.477s 00:11:33.487 user 0m16.556s 00:11:33.487 sys 0m1.808s 00:11:33.487 04:02:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:33.487 04:02:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:33.487 ************************************ 00:11:33.487 END TEST raid_state_function_test_sb 00:11:33.487 ************************************ 00:11:33.746 04:02:26 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test raid1 3 
00:11:33.746 04:02:26 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:11:33.746 04:02:26 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:33.746 04:02:26 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:33.746 ************************************ 00:11:33.746 START TEST raid_superblock_test 00:11:33.746 ************************************ 00:11:33.746 04:02:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test raid1 3 00:11:33.746 04:02:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1 00:11:33.746 04:02:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=3 00:11:33.746 04:02:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:11:33.746 04:02:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:11:33.746 04:02:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:11:33.746 04:02:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:11:33.746 04:02:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:11:33.746 04:02:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:11:33.746 04:02:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:11:33.746 04:02:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:11:33.746 04:02:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:11:33.746 04:02:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:11:33.746 04:02:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:11:33.746 04:02:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid1 '!=' raid1 ']' 
00:11:33.746 04:02:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:11:33.746 04:02:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=68728 00:11:33.746 04:02:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 68728 00:11:33.746 04:02:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 68728 ']' 00:11:33.746 04:02:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:33.746 04:02:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:33.746 04:02:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:33.746 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:33.746 04:02:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:33.746 04:02:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:11:33.746 04:02:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:33.746 [2024-12-06 04:02:26.950914] Starting SPDK v25.01-pre git sha1 a4d2a837b / DPDK 24.03.0 initialization... 
00:11:33.746 [2024-12-06 04:02:26.951140] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68728 ] 00:11:34.004 [2024-12-06 04:02:27.123993] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:34.004 [2024-12-06 04:02:27.239348] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:34.262 [2024-12-06 04:02:27.433536] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:34.262 [2024-12-06 04:02:27.433683] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:34.520 04:02:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:34.520 04:02:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:11:34.520 04:02:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:11:34.520 04:02:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:11:34.520 04:02:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:11:34.520 04:02:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:11:34.520 04:02:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:11:34.520 04:02:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:11:34.520 04:02:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:11:34.520 04:02:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:11:34.520 04:02:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:11:34.520 
04:02:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:34.520 04:02:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:34.520 malloc1 00:11:34.520 04:02:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:34.520 04:02:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:11:34.520 04:02:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:34.520 04:02:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:34.520 [2024-12-06 04:02:27.828699] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:11:34.520 [2024-12-06 04:02:27.828809] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:34.520 [2024-12-06 04:02:27.828848] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:11:34.520 [2024-12-06 04:02:27.828879] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:34.520 [2024-12-06 04:02:27.830982] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:34.520 [2024-12-06 04:02:27.831075] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:11:34.520 pt1 00:11:34.520 04:02:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:34.520 04:02:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:11:34.520 04:02:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:11:34.520 04:02:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:11:34.520 04:02:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:11:34.520 04:02:27 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:11:34.520 04:02:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:11:34.520 04:02:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:11:34.520 04:02:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:11:34.520 04:02:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:11:34.520 04:02:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:34.520 04:02:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:34.778 malloc2 00:11:34.778 04:02:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:34.778 04:02:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:11:34.778 04:02:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:34.778 04:02:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:34.778 [2024-12-06 04:02:27.886256] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:11:34.778 [2024-12-06 04:02:27.886318] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:34.779 [2024-12-06 04:02:27.886344] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:11:34.779 [2024-12-06 04:02:27.886353] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:34.779 [2024-12-06 04:02:27.888563] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:34.779 [2024-12-06 04:02:27.888644] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:11:34.779 
pt2 00:11:34.779 04:02:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:34.779 04:02:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:11:34.779 04:02:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:11:34.779 04:02:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:11:34.779 04:02:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:11:34.779 04:02:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:11:34.779 04:02:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:11:34.779 04:02:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:11:34.779 04:02:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:11:34.779 04:02:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:11:34.779 04:02:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:34.779 04:02:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:34.779 malloc3 00:11:34.779 04:02:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:34.779 04:02:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:11:34.779 04:02:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:34.779 04:02:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:34.779 [2024-12-06 04:02:27.953778] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:11:34.779 [2024-12-06 04:02:27.953883] 
vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:34.779 [2024-12-06 04:02:27.953908] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:11:34.779 [2024-12-06 04:02:27.953917] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:34.779 [2024-12-06 04:02:27.956004] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:34.779 [2024-12-06 04:02:27.956039] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:11:34.779 pt3 00:11:34.779 04:02:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:34.779 04:02:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:11:34.779 04:02:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:11:34.779 04:02:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2 pt3'\''' -n raid_bdev1 -s 00:11:34.779 04:02:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:34.779 04:02:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:34.779 [2024-12-06 04:02:27.961815] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:11:34.779 [2024-12-06 04:02:27.963613] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:11:34.779 [2024-12-06 04:02:27.963692] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:11:34.779 [2024-12-06 04:02:27.963846] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:11:34.779 [2024-12-06 04:02:27.963864] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:11:34.779 [2024-12-06 04:02:27.964131] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:11:34.779 
[2024-12-06 04:02:27.964339] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:11:34.779 [2024-12-06 04:02:27.964357] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:11:34.779 [2024-12-06 04:02:27.964510] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:34.779 04:02:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:34.779 04:02:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:11:34.779 04:02:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:34.779 04:02:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:34.779 04:02:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:34.779 04:02:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:34.779 04:02:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:34.779 04:02:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:34.779 04:02:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:34.779 04:02:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:34.779 04:02:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:34.779 04:02:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:34.779 04:02:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:34.779 04:02:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:34.779 04:02:27 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:11:34.779 04:02:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:34.779 04:02:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:34.779 "name": "raid_bdev1", 00:11:34.779 "uuid": "18dd1080-edda-4bcb-a255-0949a81f6a81", 00:11:34.779 "strip_size_kb": 0, 00:11:34.779 "state": "online", 00:11:34.779 "raid_level": "raid1", 00:11:34.779 "superblock": true, 00:11:34.779 "num_base_bdevs": 3, 00:11:34.779 "num_base_bdevs_discovered": 3, 00:11:34.779 "num_base_bdevs_operational": 3, 00:11:34.779 "base_bdevs_list": [ 00:11:34.779 { 00:11:34.779 "name": "pt1", 00:11:34.779 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:34.779 "is_configured": true, 00:11:34.779 "data_offset": 2048, 00:11:34.779 "data_size": 63488 00:11:34.779 }, 00:11:34.779 { 00:11:34.779 "name": "pt2", 00:11:34.779 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:34.779 "is_configured": true, 00:11:34.779 "data_offset": 2048, 00:11:34.779 "data_size": 63488 00:11:34.779 }, 00:11:34.779 { 00:11:34.779 "name": "pt3", 00:11:34.779 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:34.779 "is_configured": true, 00:11:34.779 "data_offset": 2048, 00:11:34.779 "data_size": 63488 00:11:34.779 } 00:11:34.779 ] 00:11:34.779 }' 00:11:34.779 04:02:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:34.779 04:02:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:35.036 04:02:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:11:35.036 04:02:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:11:35.036 04:02:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:11:35.036 04:02:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:11:35.036 04:02:28 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:11:35.036 04:02:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:11:35.036 04:02:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:11:35.036 04:02:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:35.036 04:02:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:35.036 04:02:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:11:35.294 [2024-12-06 04:02:28.393396] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:35.294 04:02:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:35.294 04:02:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:11:35.294 "name": "raid_bdev1", 00:11:35.294 "aliases": [ 00:11:35.294 "18dd1080-edda-4bcb-a255-0949a81f6a81" 00:11:35.294 ], 00:11:35.294 "product_name": "Raid Volume", 00:11:35.294 "block_size": 512, 00:11:35.294 "num_blocks": 63488, 00:11:35.294 "uuid": "18dd1080-edda-4bcb-a255-0949a81f6a81", 00:11:35.294 "assigned_rate_limits": { 00:11:35.294 "rw_ios_per_sec": 0, 00:11:35.294 "rw_mbytes_per_sec": 0, 00:11:35.294 "r_mbytes_per_sec": 0, 00:11:35.294 "w_mbytes_per_sec": 0 00:11:35.294 }, 00:11:35.294 "claimed": false, 00:11:35.294 "zoned": false, 00:11:35.294 "supported_io_types": { 00:11:35.294 "read": true, 00:11:35.294 "write": true, 00:11:35.294 "unmap": false, 00:11:35.294 "flush": false, 00:11:35.294 "reset": true, 00:11:35.294 "nvme_admin": false, 00:11:35.294 "nvme_io": false, 00:11:35.294 "nvme_io_md": false, 00:11:35.294 "write_zeroes": true, 00:11:35.294 "zcopy": false, 00:11:35.294 "get_zone_info": false, 00:11:35.294 "zone_management": false, 00:11:35.294 "zone_append": false, 00:11:35.294 "compare": false, 00:11:35.294 
"compare_and_write": false, 00:11:35.294 "abort": false, 00:11:35.294 "seek_hole": false, 00:11:35.294 "seek_data": false, 00:11:35.294 "copy": false, 00:11:35.294 "nvme_iov_md": false 00:11:35.294 }, 00:11:35.294 "memory_domains": [ 00:11:35.294 { 00:11:35.294 "dma_device_id": "system", 00:11:35.294 "dma_device_type": 1 00:11:35.294 }, 00:11:35.294 { 00:11:35.294 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:35.294 "dma_device_type": 2 00:11:35.294 }, 00:11:35.294 { 00:11:35.294 "dma_device_id": "system", 00:11:35.294 "dma_device_type": 1 00:11:35.294 }, 00:11:35.294 { 00:11:35.294 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:35.294 "dma_device_type": 2 00:11:35.294 }, 00:11:35.294 { 00:11:35.294 "dma_device_id": "system", 00:11:35.294 "dma_device_type": 1 00:11:35.294 }, 00:11:35.294 { 00:11:35.294 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:35.294 "dma_device_type": 2 00:11:35.294 } 00:11:35.294 ], 00:11:35.294 "driver_specific": { 00:11:35.294 "raid": { 00:11:35.294 "uuid": "18dd1080-edda-4bcb-a255-0949a81f6a81", 00:11:35.294 "strip_size_kb": 0, 00:11:35.294 "state": "online", 00:11:35.294 "raid_level": "raid1", 00:11:35.294 "superblock": true, 00:11:35.294 "num_base_bdevs": 3, 00:11:35.294 "num_base_bdevs_discovered": 3, 00:11:35.294 "num_base_bdevs_operational": 3, 00:11:35.294 "base_bdevs_list": [ 00:11:35.294 { 00:11:35.294 "name": "pt1", 00:11:35.294 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:35.294 "is_configured": true, 00:11:35.294 "data_offset": 2048, 00:11:35.294 "data_size": 63488 00:11:35.294 }, 00:11:35.294 { 00:11:35.294 "name": "pt2", 00:11:35.294 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:35.294 "is_configured": true, 00:11:35.294 "data_offset": 2048, 00:11:35.294 "data_size": 63488 00:11:35.294 }, 00:11:35.294 { 00:11:35.294 "name": "pt3", 00:11:35.294 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:35.294 "is_configured": true, 00:11:35.294 "data_offset": 2048, 00:11:35.294 "data_size": 63488 00:11:35.294 } 
00:11:35.294 ] 00:11:35.294 } 00:11:35.294 } 00:11:35.294 }' 00:11:35.294 04:02:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:35.294 04:02:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:11:35.294 pt2 00:11:35.294 pt3' 00:11:35.295 04:02:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:35.295 04:02:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:11:35.295 04:02:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:35.295 04:02:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:35.295 04:02:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:11:35.295 04:02:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:35.295 04:02:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:35.295 04:02:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:35.295 04:02:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:35.295 04:02:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:35.295 04:02:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:35.295 04:02:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:11:35.295 04:02:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:35.295 04:02:28 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:11:35.295 04:02:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:35.295 04:02:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:35.295 04:02:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:35.295 04:02:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:35.295 04:02:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:35.295 04:02:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:35.295 04:02:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:11:35.295 04:02:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:35.295 04:02:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:35.295 04:02:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:35.295 04:02:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:35.295 04:02:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:35.295 04:02:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:11:35.295 04:02:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:11:35.295 04:02:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:35.295 04:02:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:35.295 [2024-12-06 04:02:28.632929] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:35.554 04:02:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:11:35.554 04:02:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=18dd1080-edda-4bcb-a255-0949a81f6a81 00:11:35.554 04:02:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 18dd1080-edda-4bcb-a255-0949a81f6a81 ']' 00:11:35.554 04:02:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:11:35.554 04:02:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:35.554 04:02:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:35.554 [2024-12-06 04:02:28.680557] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:35.554 [2024-12-06 04:02:28.680587] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:35.554 [2024-12-06 04:02:28.680667] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:35.554 [2024-12-06 04:02:28.680739] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:35.554 [2024-12-06 04:02:28.680749] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:11:35.554 04:02:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:35.554 04:02:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:11:35.554 04:02:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:35.554 04:02:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:35.554 04:02:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:35.554 04:02:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:35.554 04:02:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 
00:11:35.554 04:02:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:11:35.554 04:02:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:11:35.554 04:02:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:11:35.554 04:02:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:35.554 04:02:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:35.554 04:02:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:35.554 04:02:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:11:35.554 04:02:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:11:35.554 04:02:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:35.554 04:02:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:35.554 04:02:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:35.554 04:02:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:11:35.554 04:02:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:11:35.554 04:02:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:35.554 04:02:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:35.554 04:02:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:35.554 04:02:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:11:35.554 04:02:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:35.554 04:02:28 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:11:35.554 04:02:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:11:35.554 04:02:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:35.554 04:02:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:11:35.554 04:02:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:11:35.554 04:02:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:11:35.554 04:02:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:11:35.554 04:02:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:11:35.554 04:02:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:35.554 04:02:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:11:35.554 04:02:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:35.554 04:02:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:11:35.554 04:02:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:35.554 04:02:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:35.554 [2024-12-06 04:02:28.820352] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:11:35.554 [2024-12-06 04:02:28.822183] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:11:35.554 [2024-12-06 04:02:28.822242] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:11:35.554 [2024-12-06 04:02:28.822297] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:11:35.554 [2024-12-06 04:02:28.822368] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:11:35.554 [2024-12-06 04:02:28.822390] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:11:35.554 [2024-12-06 04:02:28.822408] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:35.554 [2024-12-06 04:02:28.822419] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:11:35.554 request: 00:11:35.554 { 00:11:35.554 "name": "raid_bdev1", 00:11:35.554 "raid_level": "raid1", 00:11:35.554 "base_bdevs": [ 00:11:35.554 "malloc1", 00:11:35.554 "malloc2", 00:11:35.554 "malloc3" 00:11:35.554 ], 00:11:35.554 "superblock": false, 00:11:35.554 "method": "bdev_raid_create", 00:11:35.554 "req_id": 1 00:11:35.554 } 00:11:35.554 Got JSON-RPC error response 00:11:35.554 response: 00:11:35.554 { 00:11:35.554 "code": -17, 00:11:35.554 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:11:35.554 } 00:11:35.555 04:02:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:11:35.555 04:02:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:11:35.555 04:02:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:11:35.555 04:02:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:11:35.555 04:02:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:11:35.555 04:02:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # 
rpc_cmd bdev_raid_get_bdevs all 00:11:35.555 04:02:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:35.555 04:02:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:11:35.555 04:02:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:35.555 04:02:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:35.555 04:02:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:11:35.555 04:02:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:11:35.555 04:02:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:11:35.555 04:02:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:35.555 04:02:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:35.555 [2024-12-06 04:02:28.884190] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:11:35.555 [2024-12-06 04:02:28.884312] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:35.555 [2024-12-06 04:02:28.884352] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:11:35.555 [2024-12-06 04:02:28.884381] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:35.555 [2024-12-06 04:02:28.886585] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:35.555 [2024-12-06 04:02:28.886654] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:11:35.555 [2024-12-06 04:02:28.886777] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:11:35.555 [2024-12-06 04:02:28.886856] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:11:35.555 pt1 00:11:35.555 
04:02:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:35.555 04:02:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:11:35.555 04:02:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:35.555 04:02:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:35.555 04:02:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:35.555 04:02:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:35.555 04:02:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:35.555 04:02:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:35.555 04:02:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:35.555 04:02:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:35.555 04:02:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:35.555 04:02:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:35.555 04:02:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:35.555 04:02:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:35.555 04:02:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:35.555 04:02:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:35.813 04:02:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:35.813 "name": "raid_bdev1", 00:11:35.813 "uuid": "18dd1080-edda-4bcb-a255-0949a81f6a81", 00:11:35.813 "strip_size_kb": 0, 00:11:35.813 
"state": "configuring", 00:11:35.813 "raid_level": "raid1", 00:11:35.813 "superblock": true, 00:11:35.813 "num_base_bdevs": 3, 00:11:35.813 "num_base_bdevs_discovered": 1, 00:11:35.813 "num_base_bdevs_operational": 3, 00:11:35.813 "base_bdevs_list": [ 00:11:35.813 { 00:11:35.813 "name": "pt1", 00:11:35.813 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:35.813 "is_configured": true, 00:11:35.813 "data_offset": 2048, 00:11:35.813 "data_size": 63488 00:11:35.813 }, 00:11:35.813 { 00:11:35.813 "name": null, 00:11:35.813 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:35.813 "is_configured": false, 00:11:35.813 "data_offset": 2048, 00:11:35.813 "data_size": 63488 00:11:35.813 }, 00:11:35.813 { 00:11:35.813 "name": null, 00:11:35.813 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:35.813 "is_configured": false, 00:11:35.813 "data_offset": 2048, 00:11:35.813 "data_size": 63488 00:11:35.813 } 00:11:35.813 ] 00:11:35.813 }' 00:11:35.813 04:02:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:35.813 04:02:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:36.072 04:02:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 3 -gt 2 ']' 00:11:36.072 04:02:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:11:36.072 04:02:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:36.072 04:02:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:36.072 [2024-12-06 04:02:29.339471] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:11:36.072 [2024-12-06 04:02:29.339582] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:36.072 [2024-12-06 04:02:29.339627] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:11:36.072 
[2024-12-06 04:02:29.339659] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:36.072 [2024-12-06 04:02:29.340166] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:36.072 [2024-12-06 04:02:29.340225] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:11:36.072 [2024-12-06 04:02:29.340364] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:11:36.072 [2024-12-06 04:02:29.340420] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:11:36.072 pt2 00:11:36.072 04:02:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:36.072 04:02:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:11:36.072 04:02:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:36.072 04:02:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:36.072 [2024-12-06 04:02:29.347459] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:11:36.072 04:02:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:36.072 04:02:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:11:36.072 04:02:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:36.072 04:02:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:36.072 04:02:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:36.072 04:02:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:36.072 04:02:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:36.072 04:02:29 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:36.072 04:02:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:36.072 04:02:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:36.072 04:02:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:36.072 04:02:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:36.072 04:02:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:36.072 04:02:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:36.072 04:02:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:36.072 04:02:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:36.072 04:02:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:36.072 "name": "raid_bdev1", 00:11:36.072 "uuid": "18dd1080-edda-4bcb-a255-0949a81f6a81", 00:11:36.072 "strip_size_kb": 0, 00:11:36.072 "state": "configuring", 00:11:36.072 "raid_level": "raid1", 00:11:36.072 "superblock": true, 00:11:36.072 "num_base_bdevs": 3, 00:11:36.072 "num_base_bdevs_discovered": 1, 00:11:36.072 "num_base_bdevs_operational": 3, 00:11:36.072 "base_bdevs_list": [ 00:11:36.072 { 00:11:36.072 "name": "pt1", 00:11:36.072 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:36.072 "is_configured": true, 00:11:36.072 "data_offset": 2048, 00:11:36.072 "data_size": 63488 00:11:36.072 }, 00:11:36.072 { 00:11:36.072 "name": null, 00:11:36.072 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:36.072 "is_configured": false, 00:11:36.072 "data_offset": 0, 00:11:36.072 "data_size": 63488 00:11:36.072 }, 00:11:36.072 { 00:11:36.072 "name": null, 00:11:36.072 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:36.072 "is_configured": false, 00:11:36.072 
"data_offset": 2048, 00:11:36.072 "data_size": 63488 00:11:36.072 } 00:11:36.072 ] 00:11:36.072 }' 00:11:36.072 04:02:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:36.072 04:02:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:36.657 04:02:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:11:36.657 04:02:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:11:36.657 04:02:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:11:36.657 04:02:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:36.657 04:02:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:36.657 [2024-12-06 04:02:29.790686] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:11:36.657 [2024-12-06 04:02:29.790766] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:36.657 [2024-12-06 04:02:29.790788] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:11:36.657 [2024-12-06 04:02:29.790799] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:36.657 [2024-12-06 04:02:29.791266] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:36.657 [2024-12-06 04:02:29.791294] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:11:36.657 [2024-12-06 04:02:29.791397] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:11:36.657 [2024-12-06 04:02:29.791442] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:11:36.657 pt2 00:11:36.657 04:02:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:36.657 04:02:29 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:11:36.657 04:02:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:11:36.657 04:02:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:11:36.657 04:02:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:36.657 04:02:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:36.657 [2024-12-06 04:02:29.802659] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:11:36.657 [2024-12-06 04:02:29.802717] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:36.657 [2024-12-06 04:02:29.802733] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:11:36.657 [2024-12-06 04:02:29.802744] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:36.657 [2024-12-06 04:02:29.803190] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:36.657 [2024-12-06 04:02:29.803227] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:11:36.657 [2024-12-06 04:02:29.803304] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:11:36.657 [2024-12-06 04:02:29.803329] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:11:36.657 [2024-12-06 04:02:29.803468] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:11:36.657 [2024-12-06 04:02:29.803488] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:11:36.657 [2024-12-06 04:02:29.803756] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:11:36.657 [2024-12-06 04:02:29.803932] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid 
bdev generic 0x617000007e80 00:11:36.657 [2024-12-06 04:02:29.803941] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:11:36.657 [2024-12-06 04:02:29.804140] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:36.657 pt3 00:11:36.657 04:02:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:36.657 04:02:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:11:36.657 04:02:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:11:36.657 04:02:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:11:36.657 04:02:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:36.657 04:02:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:36.657 04:02:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:36.657 04:02:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:36.657 04:02:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:36.657 04:02:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:36.657 04:02:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:36.657 04:02:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:36.657 04:02:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:36.657 04:02:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:36.657 04:02:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:36.657 04:02:29 bdev_raid.raid_superblock_test 
-- common/autotest_common.sh@10 -- # set +x 00:11:36.657 04:02:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:36.657 04:02:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:36.657 04:02:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:36.657 "name": "raid_bdev1", 00:11:36.658 "uuid": "18dd1080-edda-4bcb-a255-0949a81f6a81", 00:11:36.658 "strip_size_kb": 0, 00:11:36.658 "state": "online", 00:11:36.658 "raid_level": "raid1", 00:11:36.658 "superblock": true, 00:11:36.658 "num_base_bdevs": 3, 00:11:36.658 "num_base_bdevs_discovered": 3, 00:11:36.658 "num_base_bdevs_operational": 3, 00:11:36.658 "base_bdevs_list": [ 00:11:36.658 { 00:11:36.658 "name": "pt1", 00:11:36.658 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:36.658 "is_configured": true, 00:11:36.658 "data_offset": 2048, 00:11:36.658 "data_size": 63488 00:11:36.658 }, 00:11:36.658 { 00:11:36.658 "name": "pt2", 00:11:36.658 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:36.658 "is_configured": true, 00:11:36.658 "data_offset": 2048, 00:11:36.658 "data_size": 63488 00:11:36.658 }, 00:11:36.658 { 00:11:36.658 "name": "pt3", 00:11:36.658 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:36.658 "is_configured": true, 00:11:36.658 "data_offset": 2048, 00:11:36.658 "data_size": 63488 00:11:36.658 } 00:11:36.658 ] 00:11:36.658 }' 00:11:36.658 04:02:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:36.658 04:02:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:36.917 04:02:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:11:36.917 04:02:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:11:36.917 04:02:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 
00:11:36.917 04:02:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:11:36.917 04:02:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:11:36.917 04:02:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:11:36.917 04:02:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:11:36.917 04:02:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:36.917 04:02:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:36.917 04:02:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:11:36.917 [2024-12-06 04:02:30.230293] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:36.917 04:02:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:36.917 04:02:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:11:36.917 "name": "raid_bdev1", 00:11:36.917 "aliases": [ 00:11:36.917 "18dd1080-edda-4bcb-a255-0949a81f6a81" 00:11:36.917 ], 00:11:36.917 "product_name": "Raid Volume", 00:11:36.917 "block_size": 512, 00:11:36.917 "num_blocks": 63488, 00:11:36.917 "uuid": "18dd1080-edda-4bcb-a255-0949a81f6a81", 00:11:36.917 "assigned_rate_limits": { 00:11:36.917 "rw_ios_per_sec": 0, 00:11:36.917 "rw_mbytes_per_sec": 0, 00:11:36.917 "r_mbytes_per_sec": 0, 00:11:36.917 "w_mbytes_per_sec": 0 00:11:36.917 }, 00:11:36.917 "claimed": false, 00:11:36.917 "zoned": false, 00:11:36.917 "supported_io_types": { 00:11:36.917 "read": true, 00:11:36.917 "write": true, 00:11:36.917 "unmap": false, 00:11:36.917 "flush": false, 00:11:36.917 "reset": true, 00:11:36.917 "nvme_admin": false, 00:11:36.917 "nvme_io": false, 00:11:36.917 "nvme_io_md": false, 00:11:36.917 "write_zeroes": true, 00:11:36.917 "zcopy": false, 00:11:36.917 "get_zone_info": false, 
00:11:36.917 "zone_management": false, 00:11:36.917 "zone_append": false, 00:11:36.917 "compare": false, 00:11:36.917 "compare_and_write": false, 00:11:36.917 "abort": false, 00:11:36.917 "seek_hole": false, 00:11:36.917 "seek_data": false, 00:11:36.917 "copy": false, 00:11:36.917 "nvme_iov_md": false 00:11:36.917 }, 00:11:36.917 "memory_domains": [ 00:11:36.917 { 00:11:36.917 "dma_device_id": "system", 00:11:36.917 "dma_device_type": 1 00:11:36.917 }, 00:11:36.917 { 00:11:36.917 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:36.917 "dma_device_type": 2 00:11:36.917 }, 00:11:36.917 { 00:11:36.917 "dma_device_id": "system", 00:11:36.917 "dma_device_type": 1 00:11:36.917 }, 00:11:36.917 { 00:11:36.917 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:36.917 "dma_device_type": 2 00:11:36.917 }, 00:11:36.917 { 00:11:36.917 "dma_device_id": "system", 00:11:36.917 "dma_device_type": 1 00:11:36.917 }, 00:11:36.917 { 00:11:36.917 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:36.917 "dma_device_type": 2 00:11:36.917 } 00:11:36.917 ], 00:11:36.917 "driver_specific": { 00:11:36.917 "raid": { 00:11:36.917 "uuid": "18dd1080-edda-4bcb-a255-0949a81f6a81", 00:11:36.917 "strip_size_kb": 0, 00:11:36.917 "state": "online", 00:11:36.917 "raid_level": "raid1", 00:11:36.917 "superblock": true, 00:11:36.917 "num_base_bdevs": 3, 00:11:36.917 "num_base_bdevs_discovered": 3, 00:11:36.917 "num_base_bdevs_operational": 3, 00:11:36.917 "base_bdevs_list": [ 00:11:36.917 { 00:11:36.917 "name": "pt1", 00:11:36.917 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:36.917 "is_configured": true, 00:11:36.917 "data_offset": 2048, 00:11:36.917 "data_size": 63488 00:11:36.917 }, 00:11:36.917 { 00:11:36.917 "name": "pt2", 00:11:36.917 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:36.917 "is_configured": true, 00:11:36.917 "data_offset": 2048, 00:11:36.917 "data_size": 63488 00:11:36.917 }, 00:11:36.917 { 00:11:36.917 "name": "pt3", 00:11:36.917 "uuid": 
"00000000-0000-0000-0000-000000000003", 00:11:36.917 "is_configured": true, 00:11:36.917 "data_offset": 2048, 00:11:36.917 "data_size": 63488 00:11:36.917 } 00:11:36.917 ] 00:11:36.917 } 00:11:36.917 } 00:11:36.917 }' 00:11:36.917 04:02:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:37.177 04:02:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:11:37.177 pt2 00:11:37.177 pt3' 00:11:37.177 04:02:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:37.177 04:02:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:11:37.177 04:02:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:37.177 04:02:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:11:37.177 04:02:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:37.177 04:02:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:37.177 04:02:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:37.177 04:02:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:37.177 04:02:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:37.177 04:02:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:37.177 04:02:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:37.177 04:02:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:11:37.177 04:02:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 
-- # xtrace_disable 00:11:37.177 04:02:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:37.177 04:02:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:37.177 04:02:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:37.177 04:02:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:37.177 04:02:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:37.177 04:02:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:37.177 04:02:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:11:37.177 04:02:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:37.177 04:02:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:37.177 04:02:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:37.177 04:02:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:37.177 04:02:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:37.177 04:02:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:37.177 04:02:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:11:37.177 04:02:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:11:37.177 04:02:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:37.177 04:02:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:37.177 [2024-12-06 04:02:30.477833] 
bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:37.177 04:02:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:37.177 04:02:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 18dd1080-edda-4bcb-a255-0949a81f6a81 '!=' 18dd1080-edda-4bcb-a255-0949a81f6a81 ']' 00:11:37.177 04:02:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1 00:11:37.177 04:02:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:11:37.177 04:02:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@199 -- # return 0 00:11:37.177 04:02:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:11:37.177 04:02:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:37.177 04:02:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:37.177 [2024-12-06 04:02:30.521499] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:11:37.177 04:02:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:37.177 04:02:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:11:37.177 04:02:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:37.177 04:02:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:37.177 04:02:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:37.177 04:02:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:37.177 04:02:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:11:37.177 04:02:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:37.177 04:02:30 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:37.177 04:02:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:37.177 04:02:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:37.435 04:02:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:37.435 04:02:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:37.435 04:02:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:37.435 04:02:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:37.435 04:02:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:37.435 04:02:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:37.435 "name": "raid_bdev1", 00:11:37.435 "uuid": "18dd1080-edda-4bcb-a255-0949a81f6a81", 00:11:37.435 "strip_size_kb": 0, 00:11:37.435 "state": "online", 00:11:37.435 "raid_level": "raid1", 00:11:37.435 "superblock": true, 00:11:37.435 "num_base_bdevs": 3, 00:11:37.435 "num_base_bdevs_discovered": 2, 00:11:37.435 "num_base_bdevs_operational": 2, 00:11:37.435 "base_bdevs_list": [ 00:11:37.435 { 00:11:37.435 "name": null, 00:11:37.435 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:37.435 "is_configured": false, 00:11:37.435 "data_offset": 0, 00:11:37.435 "data_size": 63488 00:11:37.435 }, 00:11:37.435 { 00:11:37.435 "name": "pt2", 00:11:37.435 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:37.435 "is_configured": true, 00:11:37.435 "data_offset": 2048, 00:11:37.435 "data_size": 63488 00:11:37.435 }, 00:11:37.435 { 00:11:37.435 "name": "pt3", 00:11:37.435 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:37.435 "is_configured": true, 00:11:37.435 "data_offset": 2048, 00:11:37.435 "data_size": 63488 00:11:37.435 } 
00:11:37.435 ] 00:11:37.435 }' 00:11:37.435 04:02:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:37.435 04:02:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:37.694 04:02:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:11:37.694 04:02:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:37.694 04:02:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:37.694 [2024-12-06 04:02:30.916786] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:37.694 [2024-12-06 04:02:30.916815] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:37.694 [2024-12-06 04:02:30.916886] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:37.694 [2024-12-06 04:02:30.916943] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:37.694 [2024-12-06 04:02:30.916956] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:11:37.694 04:02:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:37.694 04:02:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:11:37.694 04:02:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:37.694 04:02:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:37.694 04:02:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:37.694 04:02:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:37.694 04:02:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:11:37.694 04:02:30 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:11:37.694 04:02:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:11:37.694 04:02:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:11:37.694 04:02:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:11:37.694 04:02:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:37.694 04:02:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:37.694 04:02:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:37.694 04:02:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:11:37.694 04:02:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:11:37.694 04:02:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt3 00:11:37.694 04:02:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:37.694 04:02:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:37.695 04:02:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:37.695 04:02:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:11:37.695 04:02:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:11:37.695 04:02:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:11:37.695 04:02:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:11:37.695 04:02:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:11:37.695 04:02:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:37.695 04:02:30 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:37.695 [2024-12-06 04:02:31.004595] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:11:37.695 [2024-12-06 04:02:31.004651] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:37.695 [2024-12-06 04:02:31.004667] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a580 00:11:37.695 [2024-12-06 04:02:31.004677] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:37.695 [2024-12-06 04:02:31.006796] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:37.695 [2024-12-06 04:02:31.006838] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:11:37.695 [2024-12-06 04:02:31.006912] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:11:37.695 [2024-12-06 04:02:31.006966] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:11:37.695 pt2 00:11:37.695 04:02:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:37.695 04:02:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:11:37.695 04:02:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:37.695 04:02:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:37.695 04:02:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:37.695 04:02:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:37.695 04:02:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:11:37.695 04:02:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:37.695 04:02:31 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:37.695 04:02:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:37.695 04:02:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:37.695 04:02:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:37.695 04:02:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:37.695 04:02:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:37.695 04:02:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:37.695 04:02:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:37.952 04:02:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:37.952 "name": "raid_bdev1", 00:11:37.952 "uuid": "18dd1080-edda-4bcb-a255-0949a81f6a81", 00:11:37.952 "strip_size_kb": 0, 00:11:37.952 "state": "configuring", 00:11:37.952 "raid_level": "raid1", 00:11:37.952 "superblock": true, 00:11:37.952 "num_base_bdevs": 3, 00:11:37.952 "num_base_bdevs_discovered": 1, 00:11:37.952 "num_base_bdevs_operational": 2, 00:11:37.952 "base_bdevs_list": [ 00:11:37.952 { 00:11:37.952 "name": null, 00:11:37.952 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:37.952 "is_configured": false, 00:11:37.952 "data_offset": 2048, 00:11:37.952 "data_size": 63488 00:11:37.952 }, 00:11:37.952 { 00:11:37.952 "name": "pt2", 00:11:37.952 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:37.952 "is_configured": true, 00:11:37.952 "data_offset": 2048, 00:11:37.952 "data_size": 63488 00:11:37.952 }, 00:11:37.952 { 00:11:37.952 "name": null, 00:11:37.952 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:37.952 "is_configured": false, 00:11:37.952 "data_offset": 2048, 00:11:37.952 "data_size": 63488 00:11:37.952 } 
00:11:37.952 ] 00:11:37.952 }' 00:11:37.952 04:02:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:37.952 04:02:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:38.212 04:02:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ )) 00:11:38.212 04:02:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:11:38.212 04:02:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@519 -- # i=2 00:11:38.212 04:02:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:11:38.212 04:02:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:38.212 04:02:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:38.212 [2024-12-06 04:02:31.395983] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:11:38.212 [2024-12-06 04:02:31.396124] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:38.212 [2024-12-06 04:02:31.396168] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:11:38.212 [2024-12-06 04:02:31.396203] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:38.212 [2024-12-06 04:02:31.396780] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:38.212 [2024-12-06 04:02:31.396852] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:11:38.212 [2024-12-06 04:02:31.396989] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:11:38.212 [2024-12-06 04:02:31.397070] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:11:38.212 [2024-12-06 04:02:31.397228] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 
00:11:38.212 [2024-12-06 04:02:31.397273] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:11:38.212 [2024-12-06 04:02:31.397575] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:11:38.212 [2024-12-06 04:02:31.397788] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:11:38.212 [2024-12-06 04:02:31.397830] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:11:38.212 [2024-12-06 04:02:31.398008] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:38.212 pt3 00:11:38.212 04:02:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:38.212 04:02:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:11:38.212 04:02:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:38.212 04:02:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:38.212 04:02:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:38.212 04:02:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:38.212 04:02:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:11:38.212 04:02:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:38.212 04:02:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:38.212 04:02:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:38.212 04:02:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:38.212 04:02:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:38.212 
04:02:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:38.212 04:02:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:38.212 04:02:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:38.212 04:02:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:38.212 04:02:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:38.212 "name": "raid_bdev1", 00:11:38.212 "uuid": "18dd1080-edda-4bcb-a255-0949a81f6a81", 00:11:38.212 "strip_size_kb": 0, 00:11:38.212 "state": "online", 00:11:38.212 "raid_level": "raid1", 00:11:38.212 "superblock": true, 00:11:38.212 "num_base_bdevs": 3, 00:11:38.212 "num_base_bdevs_discovered": 2, 00:11:38.212 "num_base_bdevs_operational": 2, 00:11:38.212 "base_bdevs_list": [ 00:11:38.212 { 00:11:38.212 "name": null, 00:11:38.212 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:38.212 "is_configured": false, 00:11:38.212 "data_offset": 2048, 00:11:38.212 "data_size": 63488 00:11:38.212 }, 00:11:38.212 { 00:11:38.212 "name": "pt2", 00:11:38.212 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:38.212 "is_configured": true, 00:11:38.212 "data_offset": 2048, 00:11:38.212 "data_size": 63488 00:11:38.212 }, 00:11:38.212 { 00:11:38.212 "name": "pt3", 00:11:38.212 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:38.212 "is_configured": true, 00:11:38.212 "data_offset": 2048, 00:11:38.212 "data_size": 63488 00:11:38.212 } 00:11:38.212 ] 00:11:38.212 }' 00:11:38.212 04:02:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:38.212 04:02:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:38.480 04:02:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:11:38.480 04:02:31 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:11:38.480 04:02:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:38.480 [2024-12-06 04:02:31.803248] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:38.480 [2024-12-06 04:02:31.803280] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:38.480 [2024-12-06 04:02:31.803350] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:38.480 [2024-12-06 04:02:31.803408] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:38.480 [2024-12-06 04:02:31.803418] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:11:38.480 04:02:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:38.480 04:02:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:11:38.480 04:02:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:38.480 04:02:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:38.480 04:02:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:38.480 04:02:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:38.751 04:02:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:11:38.751 04:02:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:11:38.751 04:02:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@532 -- # '[' 3 -gt 2 ']' 00:11:38.751 04:02:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@534 -- # i=2 00:11:38.751 04:02:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@535 -- # rpc_cmd bdev_passthru_delete pt3 00:11:38.751 04:02:31 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:11:38.751 04:02:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:38.751 04:02:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:38.751 04:02:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:11:38.751 04:02:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:38.751 04:02:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:38.751 [2024-12-06 04:02:31.855171] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:11:38.751 [2024-12-06 04:02:31.855228] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:38.751 [2024-12-06 04:02:31.855246] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:11:38.751 [2024-12-06 04:02:31.855254] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:38.751 [2024-12-06 04:02:31.857345] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:38.751 [2024-12-06 04:02:31.857382] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:11:38.751 [2024-12-06 04:02:31.857463] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:11:38.751 [2024-12-06 04:02:31.857514] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:11:38.751 [2024-12-06 04:02:31.857637] bdev_raid.c:3685:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:11:38.751 [2024-12-06 04:02:31.857647] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:38.751 [2024-12-06 04:02:31.857662] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 
0x617000008580 name raid_bdev1, state configuring 00:11:38.751 [2024-12-06 04:02:31.857733] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:11:38.751 pt1 00:11:38.751 04:02:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:38.751 04:02:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@542 -- # '[' 3 -gt 2 ']' 00:11:38.751 04:02:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@545 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:11:38.751 04:02:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:38.751 04:02:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:38.751 04:02:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:38.751 04:02:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:38.751 04:02:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:11:38.751 04:02:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:38.751 04:02:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:38.751 04:02:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:38.751 04:02:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:38.751 04:02:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:38.751 04:02:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:38.751 04:02:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:38.751 04:02:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:38.751 04:02:31 bdev_raid.raid_superblock_test 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:38.751 04:02:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:38.751 "name": "raid_bdev1", 00:11:38.751 "uuid": "18dd1080-edda-4bcb-a255-0949a81f6a81", 00:11:38.751 "strip_size_kb": 0, 00:11:38.751 "state": "configuring", 00:11:38.751 "raid_level": "raid1", 00:11:38.751 "superblock": true, 00:11:38.751 "num_base_bdevs": 3, 00:11:38.751 "num_base_bdevs_discovered": 1, 00:11:38.751 "num_base_bdevs_operational": 2, 00:11:38.751 "base_bdevs_list": [ 00:11:38.751 { 00:11:38.751 "name": null, 00:11:38.751 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:38.751 "is_configured": false, 00:11:38.751 "data_offset": 2048, 00:11:38.751 "data_size": 63488 00:11:38.751 }, 00:11:38.751 { 00:11:38.751 "name": "pt2", 00:11:38.751 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:38.751 "is_configured": true, 00:11:38.751 "data_offset": 2048, 00:11:38.751 "data_size": 63488 00:11:38.751 }, 00:11:38.751 { 00:11:38.751 "name": null, 00:11:38.751 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:38.751 "is_configured": false, 00:11:38.751 "data_offset": 2048, 00:11:38.751 "data_size": 63488 00:11:38.751 } 00:11:38.751 ] 00:11:38.751 }' 00:11:38.751 04:02:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:38.751 04:02:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:39.011 04:02:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # rpc_cmd bdev_raid_get_bdevs configuring 00:11:39.011 04:02:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:39.011 04:02:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:11:39.011 04:02:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:39.011 04:02:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:11:39.011 04:02:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # [[ false == \f\a\l\s\e ]] 00:11:39.011 04:02:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@549 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:11:39.011 04:02:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:39.011 04:02:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:39.011 [2024-12-06 04:02:32.238535] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:11:39.011 [2024-12-06 04:02:32.238667] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:39.011 [2024-12-06 04:02:32.238713] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480 00:11:39.011 [2024-12-06 04:02:32.238748] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:39.011 [2024-12-06 04:02:32.239279] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:39.011 [2024-12-06 04:02:32.239341] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:11:39.011 [2024-12-06 04:02:32.239458] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:11:39.011 [2024-12-06 04:02:32.239514] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:11:39.011 [2024-12-06 04:02:32.239671] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008900 00:11:39.011 [2024-12-06 04:02:32.239712] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:11:39.011 [2024-12-06 04:02:32.240002] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:11:39.011 [2024-12-06 04:02:32.240226] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:11:39.011 [2024-12-06 04:02:32.240282] 
bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:11:39.011 [2024-12-06 04:02:32.240477] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:39.011 pt3 00:11:39.011 04:02:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:39.011 04:02:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:11:39.011 04:02:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:39.011 04:02:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:39.011 04:02:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:39.011 04:02:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:39.011 04:02:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:11:39.011 04:02:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:39.011 04:02:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:39.011 04:02:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:39.011 04:02:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:39.011 04:02:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:39.011 04:02:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:39.011 04:02:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:39.011 04:02:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:39.011 04:02:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 
]] 00:11:39.011 04:02:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:39.011 "name": "raid_bdev1", 00:11:39.011 "uuid": "18dd1080-edda-4bcb-a255-0949a81f6a81", 00:11:39.011 "strip_size_kb": 0, 00:11:39.011 "state": "online", 00:11:39.011 "raid_level": "raid1", 00:11:39.011 "superblock": true, 00:11:39.011 "num_base_bdevs": 3, 00:11:39.011 "num_base_bdevs_discovered": 2, 00:11:39.011 "num_base_bdevs_operational": 2, 00:11:39.011 "base_bdevs_list": [ 00:11:39.011 { 00:11:39.011 "name": null, 00:11:39.011 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:39.011 "is_configured": false, 00:11:39.011 "data_offset": 2048, 00:11:39.011 "data_size": 63488 00:11:39.011 }, 00:11:39.011 { 00:11:39.011 "name": "pt2", 00:11:39.011 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:39.011 "is_configured": true, 00:11:39.011 "data_offset": 2048, 00:11:39.011 "data_size": 63488 00:11:39.011 }, 00:11:39.011 { 00:11:39.011 "name": "pt3", 00:11:39.011 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:39.011 "is_configured": true, 00:11:39.011 "data_offset": 2048, 00:11:39.011 "data_size": 63488 00:11:39.011 } 00:11:39.011 ] 00:11:39.011 }' 00:11:39.011 04:02:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:39.011 04:02:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:39.580 04:02:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:11:39.580 04:02:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:11:39.580 04:02:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:39.580 04:02:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:39.580 04:02:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:39.580 04:02:32 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:11:39.580 04:02:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:11:39.580 04:02:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:39.580 04:02:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:39.580 04:02:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:11:39.580 [2024-12-06 04:02:32.737927] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:39.580 04:02:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:39.580 04:02:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # '[' 18dd1080-edda-4bcb-a255-0949a81f6a81 '!=' 18dd1080-edda-4bcb-a255-0949a81f6a81 ']' 00:11:39.580 04:02:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 68728 00:11:39.580 04:02:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 68728 ']' 00:11:39.580 04:02:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # kill -0 68728 00:11:39.580 04:02:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # uname 00:11:39.580 04:02:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:39.580 04:02:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 68728 00:11:39.580 killing process with pid 68728 00:11:39.580 04:02:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:39.580 04:02:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:39.580 04:02:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 68728' 00:11:39.580 04:02:32 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@973 -- # kill 68728 00:11:39.580 [2024-12-06 04:02:32.785203] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:39.580 [2024-12-06 04:02:32.785283] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:39.580 [2024-12-06 04:02:32.785342] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:39.580 [2024-12-06 04:02:32.785353] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 00:11:39.580 04:02:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@978 -- # wait 68728 00:11:39.840 [2024-12-06 04:02:33.094064] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:41.216 04:02:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:11:41.216 00:11:41.216 real 0m7.377s 00:11:41.216 user 0m11.484s 00:11:41.216 sys 0m1.184s 00:11:41.216 04:02:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:41.216 04:02:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:41.216 ************************************ 00:11:41.216 END TEST raid_superblock_test 00:11:41.216 ************************************ 00:11:41.216 04:02:34 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid1 3 read 00:11:41.216 04:02:34 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:11:41.216 04:02:34 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:41.216 04:02:34 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:41.216 ************************************ 00:11:41.216 START TEST raid_read_error_test 00:11:41.216 ************************************ 00:11:41.216 04:02:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid1 3 read 00:11:41.216 04:02:34 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1 00:11:41.216 04:02:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=3 00:11:41.216 04:02:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:11:41.216 04:02:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:11:41.216 04:02:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:41.216 04:02:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:11:41.217 04:02:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:41.217 04:02:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:41.217 04:02:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:11:41.217 04:02:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:41.217 04:02:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:41.217 04:02:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:11:41.217 04:02:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:41.217 04:02:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:41.217 04:02:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:11:41.217 04:02:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:11:41.217 04:02:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:11:41.217 04:02:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:11:41.217 04:02:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:11:41.217 04:02:34 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:11:41.217 04:02:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:11:41.217 04:02:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']' 00:11:41.217 04:02:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0 00:11:41.217 04:02:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:11:41.217 04:02:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.fmoaFTRRrI 00:11:41.217 04:02:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=69168 00:11:41.217 04:02:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:11:41.217 04:02:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 69168 00:11:41.217 04:02:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # '[' -z 69168 ']' 00:11:41.217 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:41.217 04:02:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:41.217 04:02:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:41.217 04:02:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:41.217 04:02:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:41.217 04:02:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:41.217 [2024-12-06 04:02:34.411432] Starting SPDK v25.01-pre git sha1 a4d2a837b / DPDK 24.03.0 initialization... 
00:11:41.217 [2024-12-06 04:02:34.411565] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69168 ] 00:11:41.476 [2024-12-06 04:02:34.590320] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:41.476 [2024-12-06 04:02:34.712981] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:41.735 [2024-12-06 04:02:34.917995] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:41.735 [2024-12-06 04:02:34.918075] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:41.992 04:02:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:41.992 04:02:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@868 -- # return 0 00:11:41.992 04:02:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:41.992 04:02:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:11:41.992 04:02:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:41.992 04:02:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:41.992 BaseBdev1_malloc 00:11:41.992 04:02:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:41.992 04:02:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:11:41.992 04:02:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:41.992 04:02:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:41.992 true 00:11:41.992 04:02:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:11:41.992 04:02:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:11:41.992 04:02:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:41.992 04:02:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:41.993 [2024-12-06 04:02:35.340721] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:11:41.993 [2024-12-06 04:02:35.340784] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:41.993 [2024-12-06 04:02:35.340807] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:11:41.993 [2024-12-06 04:02:35.340818] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:41.993 [2024-12-06 04:02:35.343133] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:41.993 [2024-12-06 04:02:35.343177] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:11:42.250 BaseBdev1 00:11:42.250 04:02:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:42.250 04:02:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:42.250 04:02:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:11:42.250 04:02:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:42.250 04:02:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:42.250 BaseBdev2_malloc 00:11:42.250 04:02:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:42.250 04:02:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:11:42.250 04:02:35 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:11:42.250 04:02:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:42.250 true 00:11:42.250 04:02:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:42.250 04:02:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:11:42.250 04:02:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:42.250 04:02:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:42.250 [2024-12-06 04:02:35.409666] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:11:42.250 [2024-12-06 04:02:35.409730] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:42.250 [2024-12-06 04:02:35.409750] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:11:42.250 [2024-12-06 04:02:35.409761] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:42.250 [2024-12-06 04:02:35.412032] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:42.250 [2024-12-06 04:02:35.412086] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:11:42.250 BaseBdev2 00:11:42.250 04:02:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:42.250 04:02:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:42.250 04:02:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:11:42.250 04:02:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:42.250 04:02:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:42.250 BaseBdev3_malloc 00:11:42.250 04:02:35 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:42.250 04:02:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:11:42.250 04:02:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:42.250 04:02:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:42.250 true 00:11:42.250 04:02:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:42.250 04:02:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:11:42.250 04:02:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:42.250 04:02:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:42.250 [2024-12-06 04:02:35.491258] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:11:42.250 [2024-12-06 04:02:35.491326] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:42.250 [2024-12-06 04:02:35.491351] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:11:42.250 [2024-12-06 04:02:35.491363] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:42.250 [2024-12-06 04:02:35.493750] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:42.250 [2024-12-06 04:02:35.493798] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:11:42.250 BaseBdev3 00:11:42.250 04:02:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:42.250 04:02:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s 00:11:42.250 04:02:35 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:11:42.250 04:02:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:42.250 [2024-12-06 04:02:35.503316] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:42.250 [2024-12-06 04:02:35.505446] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:42.250 [2024-12-06 04:02:35.505544] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:42.250 [2024-12-06 04:02:35.505786] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:11:42.251 [2024-12-06 04:02:35.505806] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:11:42.251 [2024-12-06 04:02:35.506133] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006560 00:11:42.251 [2024-12-06 04:02:35.506352] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:11:42.251 [2024-12-06 04:02:35.506374] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:11:42.251 [2024-12-06 04:02:35.506577] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:42.251 04:02:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:42.251 04:02:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:11:42.251 04:02:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:42.251 04:02:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:42.251 04:02:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:42.251 04:02:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:42.251 04:02:35 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:42.251 04:02:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:42.251 04:02:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:42.251 04:02:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:42.251 04:02:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:42.251 04:02:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:42.251 04:02:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:42.251 04:02:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:42.251 04:02:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:42.251 04:02:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:42.251 04:02:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:42.251 "name": "raid_bdev1", 00:11:42.251 "uuid": "39729e66-cebd-4b12-b1b7-ac655cf61298", 00:11:42.251 "strip_size_kb": 0, 00:11:42.251 "state": "online", 00:11:42.251 "raid_level": "raid1", 00:11:42.251 "superblock": true, 00:11:42.251 "num_base_bdevs": 3, 00:11:42.251 "num_base_bdevs_discovered": 3, 00:11:42.251 "num_base_bdevs_operational": 3, 00:11:42.251 "base_bdevs_list": [ 00:11:42.251 { 00:11:42.251 "name": "BaseBdev1", 00:11:42.251 "uuid": "4fa37e03-025a-54b8-b145-9f9553a08992", 00:11:42.251 "is_configured": true, 00:11:42.251 "data_offset": 2048, 00:11:42.251 "data_size": 63488 00:11:42.251 }, 00:11:42.251 { 00:11:42.251 "name": "BaseBdev2", 00:11:42.251 "uuid": "80654e59-2db9-502d-827c-e5f28481123e", 00:11:42.251 "is_configured": true, 00:11:42.251 "data_offset": 2048, 00:11:42.251 "data_size": 63488 
00:11:42.251 }, 00:11:42.251 { 00:11:42.251 "name": "BaseBdev3", 00:11:42.251 "uuid": "1e98969f-a35b-521c-82f4-9fc4068c4baf", 00:11:42.251 "is_configured": true, 00:11:42.251 "data_offset": 2048, 00:11:42.251 "data_size": 63488 00:11:42.251 } 00:11:42.251 ] 00:11:42.251 }' 00:11:42.251 04:02:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:42.251 04:02:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:42.816 04:02:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:11:42.816 04:02:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:11:42.816 [2024-12-06 04:02:36.027996] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006700 00:11:43.753 04:02:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:11:43.753 04:02:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:43.753 04:02:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:43.753 04:02:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:43.753 04:02:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:11:43.753 04:02:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]] 00:11:43.753 04:02:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ read = \w\r\i\t\e ]] 00:11:43.753 04:02:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=3 00:11:43.753 04:02:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:11:43.753 04:02:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:43.753 
04:02:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:43.753 04:02:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:43.753 04:02:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:43.753 04:02:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:43.753 04:02:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:43.753 04:02:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:43.753 04:02:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:43.753 04:02:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:43.753 04:02:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:43.753 04:02:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:43.753 04:02:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:43.753 04:02:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:43.753 04:02:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:43.753 04:02:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:43.753 "name": "raid_bdev1", 00:11:43.753 "uuid": "39729e66-cebd-4b12-b1b7-ac655cf61298", 00:11:43.753 "strip_size_kb": 0, 00:11:43.753 "state": "online", 00:11:43.753 "raid_level": "raid1", 00:11:43.753 "superblock": true, 00:11:43.753 "num_base_bdevs": 3, 00:11:43.753 "num_base_bdevs_discovered": 3, 00:11:43.753 "num_base_bdevs_operational": 3, 00:11:43.753 "base_bdevs_list": [ 00:11:43.753 { 00:11:43.753 "name": "BaseBdev1", 00:11:43.753 "uuid": "4fa37e03-025a-54b8-b145-9f9553a08992", 
00:11:43.753 "is_configured": true, 00:11:43.753 "data_offset": 2048, 00:11:43.753 "data_size": 63488 00:11:43.753 }, 00:11:43.753 { 00:11:43.753 "name": "BaseBdev2", 00:11:43.753 "uuid": "80654e59-2db9-502d-827c-e5f28481123e", 00:11:43.753 "is_configured": true, 00:11:43.753 "data_offset": 2048, 00:11:43.753 "data_size": 63488 00:11:43.753 }, 00:11:43.753 { 00:11:43.753 "name": "BaseBdev3", 00:11:43.753 "uuid": "1e98969f-a35b-521c-82f4-9fc4068c4baf", 00:11:43.753 "is_configured": true, 00:11:43.753 "data_offset": 2048, 00:11:43.753 "data_size": 63488 00:11:43.753 } 00:11:43.753 ] 00:11:43.753 }' 00:11:43.753 04:02:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:43.753 04:02:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:44.373 04:02:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:11:44.373 04:02:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:44.373 04:02:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:44.373 [2024-12-06 04:02:37.384731] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:44.373 [2024-12-06 04:02:37.384850] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:44.373 [2024-12-06 04:02:37.388098] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:44.373 [2024-12-06 04:02:37.388153] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:44.373 [2024-12-06 04:02:37.388271] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:44.373 [2024-12-06 04:02:37.388282] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:11:44.373 { 00:11:44.373 "results": [ 00:11:44.373 { 00:11:44.373 "job": "raid_bdev1", 
00:11:44.373 "core_mask": "0x1", 00:11:44.373 "workload": "randrw", 00:11:44.373 "percentage": 50, 00:11:44.373 "status": "finished", 00:11:44.373 "queue_depth": 1, 00:11:44.373 "io_size": 131072, 00:11:44.373 "runtime": 1.357504, 00:11:44.373 "iops": 12655.579652067323, 00:11:44.374 "mibps": 1581.9474565084154, 00:11:44.374 "io_failed": 0, 00:11:44.374 "io_timeout": 0, 00:11:44.374 "avg_latency_us": 76.19355663892716, 00:11:44.374 "min_latency_us": 24.817467248908297, 00:11:44.374 "max_latency_us": 1638.4 00:11:44.374 } 00:11:44.374 ], 00:11:44.374 "core_count": 1 00:11:44.374 } 00:11:44.374 04:02:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:44.374 04:02:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 69168 00:11:44.374 04:02:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # '[' -z 69168 ']' 00:11:44.374 04:02:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # kill -0 69168 00:11:44.374 04:02:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # uname 00:11:44.374 04:02:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:44.374 04:02:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 69168 00:11:44.374 killing process with pid 69168 00:11:44.374 04:02:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:44.374 04:02:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:44.374 04:02:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 69168' 00:11:44.374 04:02:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@973 -- # kill 69168 00:11:44.374 [2024-12-06 04:02:37.432250] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:44.374 04:02:37 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@978 -- # wait 69168 00:11:44.374 [2024-12-06 04:02:37.667311] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:45.755 04:02:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.fmoaFTRRrI 00:11:45.755 04:02:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:11:45.755 04:02:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:11:45.755 ************************************ 00:11:45.755 END TEST raid_read_error_test 00:11:45.755 ************************************ 00:11:45.755 04:02:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.00 00:11:45.756 04:02:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid1 00:11:45.756 04:02:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:11:45.756 04:02:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@199 -- # return 0 00:11:45.756 04:02:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.00 = \0\.\0\0 ]] 00:11:45.756 00:11:45.756 real 0m4.605s 00:11:45.756 user 0m5.475s 00:11:45.756 sys 0m0.534s 00:11:45.756 04:02:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:45.756 04:02:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:45.756 04:02:38 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test raid1 3 write 00:11:45.756 04:02:38 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:11:45.756 04:02:38 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:45.756 04:02:38 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:45.756 ************************************ 00:11:45.756 START TEST raid_write_error_test 00:11:45.756 ************************************ 00:11:45.756 04:02:38 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid1 3 write 00:11:45.756 04:02:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1 00:11:45.756 04:02:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=3 00:11:45.756 04:02:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:11:45.756 04:02:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:11:45.756 04:02:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:45.756 04:02:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:11:45.756 04:02:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:45.756 04:02:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:45.756 04:02:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:11:45.756 04:02:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:45.756 04:02:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:45.756 04:02:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:11:45.756 04:02:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:45.756 04:02:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:45.756 04:02:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:11:45.756 04:02:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:11:45.756 04:02:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:11:45.756 04:02:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 
00:11:45.756 04:02:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:11:45.756 04:02:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:11:45.756 04:02:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:11:45.756 04:02:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']' 00:11:45.756 04:02:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0 00:11:45.756 04:02:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:11:45.756 04:02:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.atOyxD2iPx 00:11:45.756 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:45.756 04:02:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=69314 00:11:45.756 04:02:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 69314 00:11:45.756 04:02:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # '[' -z 69314 ']' 00:11:45.756 04:02:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:45.756 04:02:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:45.756 04:02:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:11:45.756 04:02:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:45.756 04:02:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:45.756 04:02:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:11:45.756 [2024-12-06 04:02:39.068699] Starting SPDK v25.01-pre git sha1 a4d2a837b / DPDK 24.03.0 initialization... 00:11:45.756 [2024-12-06 04:02:39.068904] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69314 ] 00:11:46.015 [2024-12-06 04:02:39.228774] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:46.015 [2024-12-06 04:02:39.344496] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:46.274 [2024-12-06 04:02:39.568070] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:46.274 [2024-12-06 04:02:39.568226] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:46.843 04:02:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:46.843 04:02:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@868 -- # return 0 00:11:46.843 04:02:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:46.843 04:02:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:11:46.843 04:02:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:46.843 04:02:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:46.843 BaseBdev1_malloc 00:11:46.843 04:02:39 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:46.843 04:02:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:11:46.843 04:02:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:46.843 04:02:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:46.843 true 00:11:46.843 04:02:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:46.843 04:02:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:11:46.843 04:02:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:46.843 04:02:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:46.843 [2024-12-06 04:02:39.970809] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:11:46.843 [2024-12-06 04:02:39.970869] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:46.843 [2024-12-06 04:02:39.970908] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:11:46.843 [2024-12-06 04:02:39.970920] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:46.843 [2024-12-06 04:02:39.973319] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:46.843 [2024-12-06 04:02:39.973368] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:11:46.843 BaseBdev1 00:11:46.843 04:02:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:46.843 04:02:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:46.843 04:02:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b 
BaseBdev2_malloc 00:11:46.843 04:02:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:46.843 04:02:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:46.843 BaseBdev2_malloc 00:11:46.843 04:02:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:46.843 04:02:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:11:46.843 04:02:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:46.843 04:02:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:46.843 true 00:11:46.843 04:02:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:46.843 04:02:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:11:46.843 04:02:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:46.843 04:02:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:46.843 [2024-12-06 04:02:40.026630] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:11:46.843 [2024-12-06 04:02:40.026688] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:46.843 [2024-12-06 04:02:40.026704] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:11:46.843 [2024-12-06 04:02:40.026714] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:46.843 [2024-12-06 04:02:40.028949] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:46.843 [2024-12-06 04:02:40.028995] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:11:46.843 BaseBdev2 00:11:46.843 04:02:40 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:46.843 04:02:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:46.843 04:02:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:11:46.843 04:02:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:46.843 04:02:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:46.843 BaseBdev3_malloc 00:11:46.843 04:02:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:46.843 04:02:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:11:46.843 04:02:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:46.843 04:02:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:46.843 true 00:11:46.843 04:02:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:46.843 04:02:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:11:46.843 04:02:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:46.843 04:02:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:46.843 [2024-12-06 04:02:40.100617] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:11:46.843 [2024-12-06 04:02:40.100671] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:46.843 [2024-12-06 04:02:40.100688] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:11:46.843 [2024-12-06 04:02:40.100699] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:46.843 [2024-12-06 04:02:40.103097] 
vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:46.843 [2024-12-06 04:02:40.103186] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:11:46.843 BaseBdev3 00:11:46.843 04:02:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:46.843 04:02:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s 00:11:46.843 04:02:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:46.843 04:02:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:46.843 [2024-12-06 04:02:40.108688] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:46.843 [2024-12-06 04:02:40.110622] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:46.843 [2024-12-06 04:02:40.110758] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:46.843 [2024-12-06 04:02:40.111093] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:11:46.844 [2024-12-06 04:02:40.111115] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:11:46.844 [2024-12-06 04:02:40.111416] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006560 00:11:46.844 [2024-12-06 04:02:40.111607] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:11:46.844 [2024-12-06 04:02:40.111619] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:11:46.844 [2024-12-06 04:02:40.111792] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:46.844 04:02:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:46.844 04:02:40 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:11:46.844 04:02:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:46.844 04:02:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:46.844 04:02:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:46.844 04:02:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:46.844 04:02:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:46.844 04:02:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:46.844 04:02:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:46.844 04:02:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:46.844 04:02:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:46.844 04:02:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:46.844 04:02:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:46.844 04:02:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:46.844 04:02:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:46.844 04:02:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:46.844 04:02:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:46.844 "name": "raid_bdev1", 00:11:46.844 "uuid": "51905477-8079-401b-83eb-cfc1a8eef2ed", 00:11:46.844 "strip_size_kb": 0, 00:11:46.844 "state": "online", 00:11:46.844 "raid_level": "raid1", 00:11:46.844 "superblock": true, 00:11:46.844 
"num_base_bdevs": 3, 00:11:46.844 "num_base_bdevs_discovered": 3, 00:11:46.844 "num_base_bdevs_operational": 3, 00:11:46.844 "base_bdevs_list": [ 00:11:46.844 { 00:11:46.844 "name": "BaseBdev1", 00:11:46.844 "uuid": "ac17df69-b7aa-5e64-b08e-3b8ba9242342", 00:11:46.844 "is_configured": true, 00:11:46.844 "data_offset": 2048, 00:11:46.844 "data_size": 63488 00:11:46.844 }, 00:11:46.844 { 00:11:46.844 "name": "BaseBdev2", 00:11:46.844 "uuid": "152ecdc0-983b-5833-b516-2679af289906", 00:11:46.844 "is_configured": true, 00:11:46.844 "data_offset": 2048, 00:11:46.844 "data_size": 63488 00:11:46.844 }, 00:11:46.844 { 00:11:46.844 "name": "BaseBdev3", 00:11:46.844 "uuid": "86b1d75a-4938-5be6-96b3-0ba14a11763a", 00:11:46.844 "is_configured": true, 00:11:46.844 "data_offset": 2048, 00:11:46.844 "data_size": 63488 00:11:46.844 } 00:11:46.844 ] 00:11:46.844 }' 00:11:46.844 04:02:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:46.844 04:02:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:47.411 04:02:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:11:47.411 04:02:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:11:47.411 [2024-12-06 04:02:40.657306] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006700 00:11:48.350 04:02:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:11:48.350 04:02:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:48.350 04:02:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:48.350 [2024-12-06 04:02:41.556834] bdev_raid.c:2276:_raid_bdev_fail_base_bdev: *NOTICE*: Failing base bdev in slot 0 ('BaseBdev1') of raid bdev 'raid_bdev1' 00:11:48.350 [2024-12-06 04:02:41.557002] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:11:48.350 [2024-12-06 04:02:41.557305] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000006700 00:11:48.350 04:02:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:48.350 04:02:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:11:48.350 04:02:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]] 00:11:48.350 04:02:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ write = \w\r\i\t\e ]] 00:11:48.350 04:02:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=2 00:11:48.350 04:02:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:11:48.350 04:02:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:48.350 04:02:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:48.350 04:02:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:48.350 04:02:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:48.350 04:02:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:11:48.350 04:02:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:48.350 04:02:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:48.350 04:02:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:48.350 04:02:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:48.350 04:02:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:48.350 04:02:41 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:48.350 04:02:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:48.350 04:02:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:48.350 04:02:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:48.350 04:02:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:48.350 "name": "raid_bdev1", 00:11:48.350 "uuid": "51905477-8079-401b-83eb-cfc1a8eef2ed", 00:11:48.350 "strip_size_kb": 0, 00:11:48.350 "state": "online", 00:11:48.350 "raid_level": "raid1", 00:11:48.350 "superblock": true, 00:11:48.350 "num_base_bdevs": 3, 00:11:48.350 "num_base_bdevs_discovered": 2, 00:11:48.350 "num_base_bdevs_operational": 2, 00:11:48.350 "base_bdevs_list": [ 00:11:48.350 { 00:11:48.350 "name": null, 00:11:48.350 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:48.350 "is_configured": false, 00:11:48.350 "data_offset": 0, 00:11:48.350 "data_size": 63488 00:11:48.350 }, 00:11:48.350 { 00:11:48.350 "name": "BaseBdev2", 00:11:48.350 "uuid": "152ecdc0-983b-5833-b516-2679af289906", 00:11:48.350 "is_configured": true, 00:11:48.350 "data_offset": 2048, 00:11:48.350 "data_size": 63488 00:11:48.350 }, 00:11:48.350 { 00:11:48.350 "name": "BaseBdev3", 00:11:48.350 "uuid": "86b1d75a-4938-5be6-96b3-0ba14a11763a", 00:11:48.350 "is_configured": true, 00:11:48.350 "data_offset": 2048, 00:11:48.350 "data_size": 63488 00:11:48.350 } 00:11:48.350 ] 00:11:48.350 }' 00:11:48.350 04:02:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:48.350 04:02:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:48.945 04:02:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:11:48.945 04:02:42 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:11:48.945 04:02:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:48.945 [2024-12-06 04:02:42.011245] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:48.945 [2024-12-06 04:02:42.011283] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:48.945 [2024-12-06 04:02:42.014336] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:48.945 [2024-12-06 04:02:42.014403] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:48.945 [2024-12-06 04:02:42.014507] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:48.945 [2024-12-06 04:02:42.014524] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:11:48.945 { 00:11:48.945 "results": [ 00:11:48.945 { 00:11:48.945 "job": "raid_bdev1", 00:11:48.945 "core_mask": "0x1", 00:11:48.945 "workload": "randrw", 00:11:48.945 "percentage": 50, 00:11:48.945 "status": "finished", 00:11:48.945 "queue_depth": 1, 00:11:48.945 "io_size": 131072, 00:11:48.945 "runtime": 1.354636, 00:11:48.945 "iops": 13948.396469605119, 00:11:48.945 "mibps": 1743.5495587006399, 00:11:48.945 "io_failed": 0, 00:11:48.945 "io_timeout": 0, 00:11:48.945 "avg_latency_us": 68.77969750089844, 00:11:48.945 "min_latency_us": 24.705676855895195, 00:11:48.945 "max_latency_us": 1681.3275109170306 00:11:48.945 } 00:11:48.945 ], 00:11:48.945 "core_count": 1 00:11:48.945 } 00:11:48.945 04:02:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:48.945 04:02:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 69314 00:11:48.945 04:02:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # '[' -z 69314 ']' 00:11:48.945 04:02:42 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@958 -- # kill -0 69314 00:11:48.945 04:02:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # uname 00:11:48.945 04:02:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:48.945 04:02:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 69314 00:11:48.945 04:02:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:48.945 04:02:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:48.945 04:02:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 69314' 00:11:48.945 killing process with pid 69314 00:11:48.945 04:02:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@973 -- # kill 69314 00:11:48.945 [2024-12-06 04:02:42.046716] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:48.945 04:02:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@978 -- # wait 69314 00:11:49.216 [2024-12-06 04:02:42.296669] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:50.595 04:02:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:11:50.595 04:02:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.atOyxD2iPx 00:11:50.595 04:02:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:11:50.595 04:02:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.00 00:11:50.595 04:02:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid1 00:11:50.595 04:02:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:11:50.595 04:02:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@199 -- # return 0 00:11:50.595 04:02:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@847 
-- # [[ 0.00 = \0\.\0\0 ]] 00:11:50.595 00:11:50.595 real 0m4.602s 00:11:50.595 user 0m5.445s 00:11:50.595 sys 0m0.532s 00:11:50.595 04:02:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:50.595 ************************************ 00:11:50.595 END TEST raid_write_error_test 00:11:50.595 ************************************ 00:11:50.595 04:02:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:50.595 04:02:43 bdev_raid -- bdev/bdev_raid.sh@966 -- # for n in {2..4} 00:11:50.595 04:02:43 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:11:50.595 04:02:43 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test raid0 4 false 00:11:50.595 04:02:43 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:11:50.595 04:02:43 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:50.595 04:02:43 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:50.595 ************************************ 00:11:50.595 START TEST raid_state_function_test 00:11:50.595 ************************************ 00:11:50.595 04:02:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test raid0 4 false 00:11:50.595 04:02:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid0 00:11:50.595 04:02:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:11:50.595 04:02:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:11:50.595 04:02:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:11:50.595 04:02:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:11:50.595 04:02:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:50.595 04:02:43 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:11:50.595 04:02:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:50.595 04:02:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:50.595 04:02:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:11:50.595 04:02:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:50.595 04:02:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:50.595 04:02:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:11:50.595 04:02:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:50.595 04:02:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:50.595 04:02:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:11:50.595 04:02:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:50.595 04:02:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:50.595 04:02:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:11:50.595 04:02:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:11:50.595 04:02:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:11:50.595 04:02:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:11:50.595 04:02:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:11:50.595 04:02:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:11:50.595 04:02:43 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']' 00:11:50.595 04:02:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:11:50.595 04:02:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:11:50.595 04:02:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:11:50.595 04:02:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:11:50.595 04:02:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:11:50.595 04:02:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=69457 00:11:50.595 Process raid pid: 69457 00:11:50.595 04:02:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 69457' 00:11:50.595 04:02:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 69457 00:11:50.595 04:02:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 69457 ']' 00:11:50.595 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:50.595 04:02:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:50.595 04:02:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:50.595 04:02:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:50.595 04:02:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:50.595 04:02:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:50.595 [2024-12-06 04:02:43.715437] Starting SPDK v25.01-pre git sha1 a4d2a837b / DPDK 24.03.0 initialization... 
00:11:50.595 [2024-12-06 04:02:43.715619] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:50.595 [2024-12-06 04:02:43.893147] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:50.855 [2024-12-06 04:02:44.014816] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:51.114 [2024-12-06 04:02:44.231934] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:51.114 [2024-12-06 04:02:44.232098] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:51.374 04:02:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:51.374 04:02:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:11:51.374 04:02:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:11:51.374 04:02:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:51.374 04:02:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:51.374 [2024-12-06 04:02:44.591876] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:51.374 [2024-12-06 04:02:44.591996] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:51.374 [2024-12-06 04:02:44.592012] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:51.374 [2024-12-06 04:02:44.592024] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:51.374 [2024-12-06 04:02:44.592032] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 
BaseBdev3 00:11:51.374 [2024-12-06 04:02:44.592042] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:11:51.374 [2024-12-06 04:02:44.592068] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:11:51.374 [2024-12-06 04:02:44.592079] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:11:51.374 04:02:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:51.374 04:02:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:11:51.374 04:02:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:51.374 04:02:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:51.374 04:02:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:51.374 04:02:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:51.374 04:02:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:51.374 04:02:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:51.374 04:02:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:51.374 04:02:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:51.374 04:02:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:51.374 04:02:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:51.374 04:02:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:51.374 04:02:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 
00:11:51.374 04:02:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:51.374 04:02:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:51.374 04:02:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:51.375 "name": "Existed_Raid", 00:11:51.375 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:51.375 "strip_size_kb": 64, 00:11:51.375 "state": "configuring", 00:11:51.375 "raid_level": "raid0", 00:11:51.375 "superblock": false, 00:11:51.375 "num_base_bdevs": 4, 00:11:51.375 "num_base_bdevs_discovered": 0, 00:11:51.375 "num_base_bdevs_operational": 4, 00:11:51.375 "base_bdevs_list": [ 00:11:51.375 { 00:11:51.375 "name": "BaseBdev1", 00:11:51.375 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:51.375 "is_configured": false, 00:11:51.375 "data_offset": 0, 00:11:51.375 "data_size": 0 00:11:51.375 }, 00:11:51.375 { 00:11:51.375 "name": "BaseBdev2", 00:11:51.375 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:51.375 "is_configured": false, 00:11:51.375 "data_offset": 0, 00:11:51.375 "data_size": 0 00:11:51.375 }, 00:11:51.375 { 00:11:51.375 "name": "BaseBdev3", 00:11:51.375 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:51.375 "is_configured": false, 00:11:51.375 "data_offset": 0, 00:11:51.375 "data_size": 0 00:11:51.375 }, 00:11:51.375 { 00:11:51.375 "name": "BaseBdev4", 00:11:51.375 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:51.375 "is_configured": false, 00:11:51.375 "data_offset": 0, 00:11:51.375 "data_size": 0 00:11:51.375 } 00:11:51.375 ] 00:11:51.375 }' 00:11:51.375 04:02:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:51.375 04:02:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:51.966 04:02:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 
00:11:51.966 04:02:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:51.966 04:02:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:51.966 [2024-12-06 04:02:45.067035] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:51.966 [2024-12-06 04:02:45.067165] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:11:51.966 04:02:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:51.966 04:02:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:11:51.966 04:02:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:51.966 04:02:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:51.966 [2024-12-06 04:02:45.075015] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:51.966 [2024-12-06 04:02:45.075156] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:51.966 [2024-12-06 04:02:45.075191] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:51.966 [2024-12-06 04:02:45.075217] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:51.966 [2024-12-06 04:02:45.075239] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:11:51.966 [2024-12-06 04:02:45.075271] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:11:51.966 [2024-12-06 04:02:45.075308] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:11:51.966 [2024-12-06 04:02:45.075339] bdev_raid_rpc.c: 
311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:11:51.966 04:02:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:51.966 04:02:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:11:51.966 04:02:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:51.966 04:02:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:51.966 [2024-12-06 04:02:45.119862] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:51.966 BaseBdev1 00:11:51.966 04:02:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:51.966 04:02:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:11:51.966 04:02:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:11:51.966 04:02:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:51.967 04:02:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:11:51.967 04:02:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:51.967 04:02:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:51.967 04:02:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:51.967 04:02:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:51.967 04:02:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:51.967 04:02:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:51.967 04:02:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd 
bdev_get_bdevs -b BaseBdev1 -t 2000 00:11:51.967 04:02:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:51.967 04:02:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:51.967 [ 00:11:51.967 { 00:11:51.967 "name": "BaseBdev1", 00:11:51.967 "aliases": [ 00:11:51.967 "1015b509-df04-434e-bed8-3de4303e1ffd" 00:11:51.967 ], 00:11:51.967 "product_name": "Malloc disk", 00:11:51.967 "block_size": 512, 00:11:51.967 "num_blocks": 65536, 00:11:51.967 "uuid": "1015b509-df04-434e-bed8-3de4303e1ffd", 00:11:51.967 "assigned_rate_limits": { 00:11:51.967 "rw_ios_per_sec": 0, 00:11:51.967 "rw_mbytes_per_sec": 0, 00:11:51.967 "r_mbytes_per_sec": 0, 00:11:51.967 "w_mbytes_per_sec": 0 00:11:51.967 }, 00:11:51.967 "claimed": true, 00:11:51.967 "claim_type": "exclusive_write", 00:11:51.967 "zoned": false, 00:11:51.967 "supported_io_types": { 00:11:51.967 "read": true, 00:11:51.967 "write": true, 00:11:51.967 "unmap": true, 00:11:51.967 "flush": true, 00:11:51.967 "reset": true, 00:11:51.967 "nvme_admin": false, 00:11:51.967 "nvme_io": false, 00:11:51.967 "nvme_io_md": false, 00:11:51.967 "write_zeroes": true, 00:11:51.967 "zcopy": true, 00:11:51.967 "get_zone_info": false, 00:11:51.967 "zone_management": false, 00:11:51.967 "zone_append": false, 00:11:51.967 "compare": false, 00:11:51.967 "compare_and_write": false, 00:11:51.967 "abort": true, 00:11:51.967 "seek_hole": false, 00:11:51.967 "seek_data": false, 00:11:51.967 "copy": true, 00:11:51.967 "nvme_iov_md": false 00:11:51.967 }, 00:11:51.967 "memory_domains": [ 00:11:51.967 { 00:11:51.967 "dma_device_id": "system", 00:11:51.967 "dma_device_type": 1 00:11:51.967 }, 00:11:51.967 { 00:11:51.967 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:51.967 "dma_device_type": 2 00:11:51.967 } 00:11:51.967 ], 00:11:51.967 "driver_specific": {} 00:11:51.967 } 00:11:51.967 ] 00:11:51.967 04:02:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:11:51.967 04:02:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:11:51.967 04:02:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:11:51.967 04:02:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:51.967 04:02:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:51.967 04:02:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:51.967 04:02:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:51.967 04:02:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:51.967 04:02:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:51.967 04:02:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:51.967 04:02:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:51.967 04:02:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:51.967 04:02:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:51.967 04:02:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:51.967 04:02:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:51.967 04:02:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:51.967 04:02:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:51.967 04:02:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:51.967 "name": "Existed_Raid", 
00:11:51.967 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:51.967 "strip_size_kb": 64, 00:11:51.967 "state": "configuring", 00:11:51.967 "raid_level": "raid0", 00:11:51.967 "superblock": false, 00:11:51.967 "num_base_bdevs": 4, 00:11:51.967 "num_base_bdevs_discovered": 1, 00:11:51.967 "num_base_bdevs_operational": 4, 00:11:51.967 "base_bdevs_list": [ 00:11:51.967 { 00:11:51.967 "name": "BaseBdev1", 00:11:51.967 "uuid": "1015b509-df04-434e-bed8-3de4303e1ffd", 00:11:51.967 "is_configured": true, 00:11:51.967 "data_offset": 0, 00:11:51.967 "data_size": 65536 00:11:51.967 }, 00:11:51.967 { 00:11:51.967 "name": "BaseBdev2", 00:11:51.967 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:51.967 "is_configured": false, 00:11:51.967 "data_offset": 0, 00:11:51.967 "data_size": 0 00:11:51.967 }, 00:11:51.967 { 00:11:51.967 "name": "BaseBdev3", 00:11:51.967 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:51.967 "is_configured": false, 00:11:51.967 "data_offset": 0, 00:11:51.967 "data_size": 0 00:11:51.967 }, 00:11:51.967 { 00:11:51.967 "name": "BaseBdev4", 00:11:51.967 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:51.967 "is_configured": false, 00:11:51.967 "data_offset": 0, 00:11:51.967 "data_size": 0 00:11:51.967 } 00:11:51.967 ] 00:11:51.967 }' 00:11:51.967 04:02:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:51.967 04:02:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:52.536 04:02:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:11:52.536 04:02:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:52.536 04:02:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:52.536 [2024-12-06 04:02:45.607103] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:52.536 [2024-12-06 04:02:45.607250] bdev_raid.c: 
380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:11:52.536 04:02:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:52.536 04:02:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:11:52.536 04:02:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:52.536 04:02:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:52.536 [2024-12-06 04:02:45.619157] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:52.536 [2024-12-06 04:02:45.621210] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:52.536 [2024-12-06 04:02:45.621320] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:52.536 [2024-12-06 04:02:45.621356] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:11:52.536 [2024-12-06 04:02:45.621385] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:11:52.536 [2024-12-06 04:02:45.621410] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:11:52.536 [2024-12-06 04:02:45.621445] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:11:52.536 04:02:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:52.536 04:02:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:11:52.536 04:02:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:52.536 04:02:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 
00:11:52.536 04:02:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:52.536 04:02:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:52.536 04:02:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:52.536 04:02:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:52.536 04:02:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:52.536 04:02:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:52.536 04:02:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:52.536 04:02:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:52.536 04:02:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:52.536 04:02:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:52.536 04:02:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:52.536 04:02:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:52.536 04:02:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:52.536 04:02:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:52.536 04:02:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:52.536 "name": "Existed_Raid", 00:11:52.536 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:52.536 "strip_size_kb": 64, 00:11:52.536 "state": "configuring", 00:11:52.536 "raid_level": "raid0", 00:11:52.536 "superblock": false, 00:11:52.536 "num_base_bdevs": 4, 00:11:52.536 
"num_base_bdevs_discovered": 1, 00:11:52.536 "num_base_bdevs_operational": 4, 00:11:52.536 "base_bdevs_list": [ 00:11:52.536 { 00:11:52.536 "name": "BaseBdev1", 00:11:52.536 "uuid": "1015b509-df04-434e-bed8-3de4303e1ffd", 00:11:52.536 "is_configured": true, 00:11:52.536 "data_offset": 0, 00:11:52.536 "data_size": 65536 00:11:52.536 }, 00:11:52.536 { 00:11:52.536 "name": "BaseBdev2", 00:11:52.536 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:52.536 "is_configured": false, 00:11:52.536 "data_offset": 0, 00:11:52.536 "data_size": 0 00:11:52.536 }, 00:11:52.536 { 00:11:52.536 "name": "BaseBdev3", 00:11:52.536 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:52.536 "is_configured": false, 00:11:52.536 "data_offset": 0, 00:11:52.536 "data_size": 0 00:11:52.536 }, 00:11:52.536 { 00:11:52.536 "name": "BaseBdev4", 00:11:52.536 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:52.536 "is_configured": false, 00:11:52.536 "data_offset": 0, 00:11:52.536 "data_size": 0 00:11:52.536 } 00:11:52.536 ] 00:11:52.536 }' 00:11:52.536 04:02:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:52.536 04:02:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:52.795 04:02:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:11:52.795 04:02:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:52.795 04:02:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:52.795 [2024-12-06 04:02:46.131895] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:52.795 BaseBdev2 00:11:52.795 04:02:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:52.795 04:02:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:11:52.795 04:02:46 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:11:52.795 04:02:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:52.795 04:02:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:11:52.795 04:02:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:52.795 04:02:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:52.795 04:02:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:52.795 04:02:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:52.795 04:02:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:52.795 04:02:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:52.795 04:02:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:11:52.795 04:02:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:52.795 04:02:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:53.053 [ 00:11:53.053 { 00:11:53.053 "name": "BaseBdev2", 00:11:53.053 "aliases": [ 00:11:53.053 "76cba482-e5aa-43bb-9bd1-48e57d07a869" 00:11:53.053 ], 00:11:53.053 "product_name": "Malloc disk", 00:11:53.053 "block_size": 512, 00:11:53.053 "num_blocks": 65536, 00:11:53.053 "uuid": "76cba482-e5aa-43bb-9bd1-48e57d07a869", 00:11:53.053 "assigned_rate_limits": { 00:11:53.053 "rw_ios_per_sec": 0, 00:11:53.053 "rw_mbytes_per_sec": 0, 00:11:53.053 "r_mbytes_per_sec": 0, 00:11:53.053 "w_mbytes_per_sec": 0 00:11:53.053 }, 00:11:53.053 "claimed": true, 00:11:53.053 "claim_type": "exclusive_write", 00:11:53.053 "zoned": false, 00:11:53.053 "supported_io_types": { 
00:11:53.053 "read": true, 00:11:53.053 "write": true, 00:11:53.053 "unmap": true, 00:11:53.053 "flush": true, 00:11:53.053 "reset": true, 00:11:53.053 "nvme_admin": false, 00:11:53.053 "nvme_io": false, 00:11:53.053 "nvme_io_md": false, 00:11:53.053 "write_zeroes": true, 00:11:53.053 "zcopy": true, 00:11:53.053 "get_zone_info": false, 00:11:53.053 "zone_management": false, 00:11:53.053 "zone_append": false, 00:11:53.053 "compare": false, 00:11:53.053 "compare_and_write": false, 00:11:53.053 "abort": true, 00:11:53.053 "seek_hole": false, 00:11:53.053 "seek_data": false, 00:11:53.053 "copy": true, 00:11:53.053 "nvme_iov_md": false 00:11:53.053 }, 00:11:53.053 "memory_domains": [ 00:11:53.053 { 00:11:53.053 "dma_device_id": "system", 00:11:53.054 "dma_device_type": 1 00:11:53.054 }, 00:11:53.054 { 00:11:53.054 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:53.054 "dma_device_type": 2 00:11:53.054 } 00:11:53.054 ], 00:11:53.054 "driver_specific": {} 00:11:53.054 } 00:11:53.054 ] 00:11:53.054 04:02:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:53.054 04:02:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:11:53.054 04:02:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:11:53.054 04:02:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:53.054 04:02:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:11:53.054 04:02:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:53.054 04:02:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:53.054 04:02:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:53.054 04:02:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # 
local strip_size=64 00:11:53.054 04:02:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:53.054 04:02:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:53.054 04:02:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:53.054 04:02:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:53.054 04:02:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:53.054 04:02:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:53.054 04:02:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:53.054 04:02:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:53.054 04:02:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:53.054 04:02:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:53.054 04:02:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:53.054 "name": "Existed_Raid", 00:11:53.054 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:53.054 "strip_size_kb": 64, 00:11:53.054 "state": "configuring", 00:11:53.054 "raid_level": "raid0", 00:11:53.054 "superblock": false, 00:11:53.054 "num_base_bdevs": 4, 00:11:53.054 "num_base_bdevs_discovered": 2, 00:11:53.054 "num_base_bdevs_operational": 4, 00:11:53.054 "base_bdevs_list": [ 00:11:53.054 { 00:11:53.054 "name": "BaseBdev1", 00:11:53.054 "uuid": "1015b509-df04-434e-bed8-3de4303e1ffd", 00:11:53.054 "is_configured": true, 00:11:53.054 "data_offset": 0, 00:11:53.054 "data_size": 65536 00:11:53.054 }, 00:11:53.054 { 00:11:53.054 "name": "BaseBdev2", 00:11:53.054 "uuid": "76cba482-e5aa-43bb-9bd1-48e57d07a869", 00:11:53.054 
"is_configured": true, 00:11:53.054 "data_offset": 0, 00:11:53.054 "data_size": 65536 00:11:53.054 }, 00:11:53.054 { 00:11:53.054 "name": "BaseBdev3", 00:11:53.054 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:53.054 "is_configured": false, 00:11:53.054 "data_offset": 0, 00:11:53.054 "data_size": 0 00:11:53.054 }, 00:11:53.054 { 00:11:53.054 "name": "BaseBdev4", 00:11:53.054 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:53.054 "is_configured": false, 00:11:53.054 "data_offset": 0, 00:11:53.054 "data_size": 0 00:11:53.054 } 00:11:53.054 ] 00:11:53.054 }' 00:11:53.054 04:02:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:53.054 04:02:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:53.313 04:02:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:11:53.313 04:02:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:53.313 04:02:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:53.573 [2024-12-06 04:02:46.672348] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:53.573 BaseBdev3 00:11:53.573 04:02:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:53.573 04:02:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:11:53.573 04:02:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:11:53.573 04:02:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:53.573 04:02:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:11:53.573 04:02:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:53.573 04:02:46 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:53.573 04:02:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:53.573 04:02:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:53.573 04:02:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:53.573 04:02:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:53.573 04:02:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:11:53.573 04:02:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:53.573 04:02:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:53.573 [ 00:11:53.573 { 00:11:53.573 "name": "BaseBdev3", 00:11:53.573 "aliases": [ 00:11:53.573 "622f8d90-8b20-4799-9ff6-488b07d73b1a" 00:11:53.573 ], 00:11:53.573 "product_name": "Malloc disk", 00:11:53.573 "block_size": 512, 00:11:53.573 "num_blocks": 65536, 00:11:53.573 "uuid": "622f8d90-8b20-4799-9ff6-488b07d73b1a", 00:11:53.573 "assigned_rate_limits": { 00:11:53.573 "rw_ios_per_sec": 0, 00:11:53.573 "rw_mbytes_per_sec": 0, 00:11:53.573 "r_mbytes_per_sec": 0, 00:11:53.573 "w_mbytes_per_sec": 0 00:11:53.573 }, 00:11:53.573 "claimed": true, 00:11:53.573 "claim_type": "exclusive_write", 00:11:53.573 "zoned": false, 00:11:53.573 "supported_io_types": { 00:11:53.573 "read": true, 00:11:53.573 "write": true, 00:11:53.573 "unmap": true, 00:11:53.573 "flush": true, 00:11:53.573 "reset": true, 00:11:53.573 "nvme_admin": false, 00:11:53.573 "nvme_io": false, 00:11:53.573 "nvme_io_md": false, 00:11:53.573 "write_zeroes": true, 00:11:53.573 "zcopy": true, 00:11:53.573 "get_zone_info": false, 00:11:53.573 "zone_management": false, 00:11:53.573 "zone_append": false, 00:11:53.573 "compare": false, 00:11:53.573 "compare_and_write": false, 
00:11:53.573 "abort": true, 00:11:53.573 "seek_hole": false, 00:11:53.573 "seek_data": false, 00:11:53.573 "copy": true, 00:11:53.573 "nvme_iov_md": false 00:11:53.573 }, 00:11:53.573 "memory_domains": [ 00:11:53.573 { 00:11:53.573 "dma_device_id": "system", 00:11:53.573 "dma_device_type": 1 00:11:53.573 }, 00:11:53.573 { 00:11:53.573 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:53.573 "dma_device_type": 2 00:11:53.573 } 00:11:53.573 ], 00:11:53.573 "driver_specific": {} 00:11:53.573 } 00:11:53.573 ] 00:11:53.573 04:02:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:53.573 04:02:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:11:53.573 04:02:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:11:53.573 04:02:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:53.573 04:02:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:11:53.573 04:02:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:53.573 04:02:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:53.573 04:02:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:53.573 04:02:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:53.573 04:02:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:53.573 04:02:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:53.573 04:02:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:53.573 04:02:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
00:11:53.573 04:02:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:53.573 04:02:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:53.573 04:02:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:53.573 04:02:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:53.573 04:02:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:53.573 04:02:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:53.573 04:02:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:53.573 "name": "Existed_Raid", 00:11:53.573 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:53.573 "strip_size_kb": 64, 00:11:53.573 "state": "configuring", 00:11:53.573 "raid_level": "raid0", 00:11:53.573 "superblock": false, 00:11:53.573 "num_base_bdevs": 4, 00:11:53.573 "num_base_bdevs_discovered": 3, 00:11:53.573 "num_base_bdevs_operational": 4, 00:11:53.573 "base_bdevs_list": [ 00:11:53.573 { 00:11:53.573 "name": "BaseBdev1", 00:11:53.573 "uuid": "1015b509-df04-434e-bed8-3de4303e1ffd", 00:11:53.573 "is_configured": true, 00:11:53.573 "data_offset": 0, 00:11:53.573 "data_size": 65536 00:11:53.573 }, 00:11:53.573 { 00:11:53.573 "name": "BaseBdev2", 00:11:53.573 "uuid": "76cba482-e5aa-43bb-9bd1-48e57d07a869", 00:11:53.573 "is_configured": true, 00:11:53.573 "data_offset": 0, 00:11:53.573 "data_size": 65536 00:11:53.573 }, 00:11:53.573 { 00:11:53.573 "name": "BaseBdev3", 00:11:53.573 "uuid": "622f8d90-8b20-4799-9ff6-488b07d73b1a", 00:11:53.573 "is_configured": true, 00:11:53.573 "data_offset": 0, 00:11:53.573 "data_size": 65536 00:11:53.573 }, 00:11:53.573 { 00:11:53.573 "name": "BaseBdev4", 00:11:53.573 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:53.573 "is_configured": false, 
00:11:53.573 "data_offset": 0, 00:11:53.573 "data_size": 0 00:11:53.573 } 00:11:53.573 ] 00:11:53.573 }' 00:11:53.573 04:02:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:53.573 04:02:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:53.833 04:02:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:11:53.833 04:02:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:53.833 04:02:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:53.833 [2024-12-06 04:02:47.129170] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:11:53.833 [2024-12-06 04:02:47.129314] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:11:53.833 [2024-12-06 04:02:47.129345] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 262144, blocklen 512 00:11:53.833 [2024-12-06 04:02:47.129733] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:11:53.833 [2024-12-06 04:02:47.129999] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:11:53.834 [2024-12-06 04:02:47.130079] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:11:53.834 [2024-12-06 04:02:47.130454] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:53.834 BaseBdev4 00:11:53.834 04:02:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:53.834 04:02:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:11:53.834 04:02:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:11:53.834 04:02:47 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:53.834 04:02:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:11:53.834 04:02:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:53.834 04:02:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:53.834 04:02:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:53.834 04:02:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:53.834 04:02:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:53.834 04:02:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:53.834 04:02:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:11:53.834 04:02:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:53.834 04:02:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:53.834 [ 00:11:53.834 { 00:11:53.834 "name": "BaseBdev4", 00:11:53.834 "aliases": [ 00:11:53.834 "abe2e335-e60d-4a6e-bf19-215de811b29f" 00:11:53.834 ], 00:11:53.834 "product_name": "Malloc disk", 00:11:53.834 "block_size": 512, 00:11:53.834 "num_blocks": 65536, 00:11:53.834 "uuid": "abe2e335-e60d-4a6e-bf19-215de811b29f", 00:11:53.834 "assigned_rate_limits": { 00:11:53.834 "rw_ios_per_sec": 0, 00:11:53.834 "rw_mbytes_per_sec": 0, 00:11:53.834 "r_mbytes_per_sec": 0, 00:11:53.834 "w_mbytes_per_sec": 0 00:11:53.834 }, 00:11:53.834 "claimed": true, 00:11:53.834 "claim_type": "exclusive_write", 00:11:53.834 "zoned": false, 00:11:53.834 "supported_io_types": { 00:11:53.834 "read": true, 00:11:53.834 "write": true, 00:11:53.834 "unmap": true, 00:11:53.834 "flush": true, 00:11:53.834 "reset": true, 00:11:53.834 
"nvme_admin": false, 00:11:53.834 "nvme_io": false, 00:11:53.834 "nvme_io_md": false, 00:11:53.834 "write_zeroes": true, 00:11:53.834 "zcopy": true, 00:11:53.834 "get_zone_info": false, 00:11:53.834 "zone_management": false, 00:11:53.834 "zone_append": false, 00:11:53.834 "compare": false, 00:11:53.834 "compare_and_write": false, 00:11:53.834 "abort": true, 00:11:53.834 "seek_hole": false, 00:11:53.834 "seek_data": false, 00:11:53.834 "copy": true, 00:11:53.834 "nvme_iov_md": false 00:11:53.834 }, 00:11:53.834 "memory_domains": [ 00:11:53.834 { 00:11:53.834 "dma_device_id": "system", 00:11:53.834 "dma_device_type": 1 00:11:53.834 }, 00:11:53.834 { 00:11:53.834 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:53.834 "dma_device_type": 2 00:11:53.834 } 00:11:53.834 ], 00:11:53.834 "driver_specific": {} 00:11:53.834 } 00:11:53.834 ] 00:11:53.834 04:02:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:53.834 04:02:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:11:53.834 04:02:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:11:53.834 04:02:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:53.834 04:02:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 64 4 00:11:53.834 04:02:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:53.834 04:02:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:53.834 04:02:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:53.834 04:02:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:53.834 04:02:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:53.834 04:02:47 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:53.834 04:02:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:53.834 04:02:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:53.834 04:02:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:53.834 04:02:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:53.834 04:02:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:53.834 04:02:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:53.834 04:02:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:53.834 04:02:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:54.094 04:02:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:54.094 "name": "Existed_Raid", 00:11:54.094 "uuid": "4244e5cd-d0ee-4fd1-a981-3512ac538b0f", 00:11:54.094 "strip_size_kb": 64, 00:11:54.094 "state": "online", 00:11:54.094 "raid_level": "raid0", 00:11:54.094 "superblock": false, 00:11:54.094 "num_base_bdevs": 4, 00:11:54.094 "num_base_bdevs_discovered": 4, 00:11:54.094 "num_base_bdevs_operational": 4, 00:11:54.094 "base_bdevs_list": [ 00:11:54.094 { 00:11:54.094 "name": "BaseBdev1", 00:11:54.094 "uuid": "1015b509-df04-434e-bed8-3de4303e1ffd", 00:11:54.094 "is_configured": true, 00:11:54.094 "data_offset": 0, 00:11:54.094 "data_size": 65536 00:11:54.094 }, 00:11:54.094 { 00:11:54.094 "name": "BaseBdev2", 00:11:54.094 "uuid": "76cba482-e5aa-43bb-9bd1-48e57d07a869", 00:11:54.094 "is_configured": true, 00:11:54.094 "data_offset": 0, 00:11:54.094 "data_size": 65536 00:11:54.094 }, 00:11:54.094 { 00:11:54.094 "name": "BaseBdev3", 00:11:54.094 "uuid": 
"622f8d90-8b20-4799-9ff6-488b07d73b1a", 00:11:54.094 "is_configured": true, 00:11:54.094 "data_offset": 0, 00:11:54.094 "data_size": 65536 00:11:54.094 }, 00:11:54.094 { 00:11:54.094 "name": "BaseBdev4", 00:11:54.094 "uuid": "abe2e335-e60d-4a6e-bf19-215de811b29f", 00:11:54.094 "is_configured": true, 00:11:54.094 "data_offset": 0, 00:11:54.094 "data_size": 65536 00:11:54.094 } 00:11:54.094 ] 00:11:54.094 }' 00:11:54.094 04:02:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:54.094 04:02:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:54.353 04:02:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:11:54.353 04:02:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:11:54.353 04:02:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:11:54.353 04:02:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:11:54.353 04:02:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:11:54.353 04:02:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:11:54.353 04:02:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:11:54.353 04:02:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:11:54.353 04:02:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:54.353 04:02:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:54.353 [2024-12-06 04:02:47.616833] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:54.353 04:02:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:54.353 04:02:47 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:11:54.353 "name": "Existed_Raid", 00:11:54.353 "aliases": [ 00:11:54.353 "4244e5cd-d0ee-4fd1-a981-3512ac538b0f" 00:11:54.353 ], 00:11:54.353 "product_name": "Raid Volume", 00:11:54.353 "block_size": 512, 00:11:54.353 "num_blocks": 262144, 00:11:54.353 "uuid": "4244e5cd-d0ee-4fd1-a981-3512ac538b0f", 00:11:54.353 "assigned_rate_limits": { 00:11:54.353 "rw_ios_per_sec": 0, 00:11:54.353 "rw_mbytes_per_sec": 0, 00:11:54.353 "r_mbytes_per_sec": 0, 00:11:54.353 "w_mbytes_per_sec": 0 00:11:54.353 }, 00:11:54.353 "claimed": false, 00:11:54.353 "zoned": false, 00:11:54.353 "supported_io_types": { 00:11:54.353 "read": true, 00:11:54.353 "write": true, 00:11:54.353 "unmap": true, 00:11:54.353 "flush": true, 00:11:54.353 "reset": true, 00:11:54.353 "nvme_admin": false, 00:11:54.353 "nvme_io": false, 00:11:54.353 "nvme_io_md": false, 00:11:54.353 "write_zeroes": true, 00:11:54.353 "zcopy": false, 00:11:54.353 "get_zone_info": false, 00:11:54.353 "zone_management": false, 00:11:54.353 "zone_append": false, 00:11:54.353 "compare": false, 00:11:54.353 "compare_and_write": false, 00:11:54.353 "abort": false, 00:11:54.353 "seek_hole": false, 00:11:54.353 "seek_data": false, 00:11:54.353 "copy": false, 00:11:54.353 "nvme_iov_md": false 00:11:54.353 }, 00:11:54.353 "memory_domains": [ 00:11:54.353 { 00:11:54.353 "dma_device_id": "system", 00:11:54.353 "dma_device_type": 1 00:11:54.353 }, 00:11:54.353 { 00:11:54.353 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:54.353 "dma_device_type": 2 00:11:54.353 }, 00:11:54.353 { 00:11:54.353 "dma_device_id": "system", 00:11:54.353 "dma_device_type": 1 00:11:54.353 }, 00:11:54.353 { 00:11:54.353 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:54.353 "dma_device_type": 2 00:11:54.353 }, 00:11:54.353 { 00:11:54.353 "dma_device_id": "system", 00:11:54.353 "dma_device_type": 1 00:11:54.353 }, 00:11:54.353 { 00:11:54.353 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 
00:11:54.353 "dma_device_type": 2 00:11:54.353 }, 00:11:54.353 { 00:11:54.353 "dma_device_id": "system", 00:11:54.353 "dma_device_type": 1 00:11:54.353 }, 00:11:54.353 { 00:11:54.353 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:54.353 "dma_device_type": 2 00:11:54.353 } 00:11:54.353 ], 00:11:54.354 "driver_specific": { 00:11:54.354 "raid": { 00:11:54.354 "uuid": "4244e5cd-d0ee-4fd1-a981-3512ac538b0f", 00:11:54.354 "strip_size_kb": 64, 00:11:54.354 "state": "online", 00:11:54.354 "raid_level": "raid0", 00:11:54.354 "superblock": false, 00:11:54.354 "num_base_bdevs": 4, 00:11:54.354 "num_base_bdevs_discovered": 4, 00:11:54.354 "num_base_bdevs_operational": 4, 00:11:54.354 "base_bdevs_list": [ 00:11:54.354 { 00:11:54.354 "name": "BaseBdev1", 00:11:54.354 "uuid": "1015b509-df04-434e-bed8-3de4303e1ffd", 00:11:54.354 "is_configured": true, 00:11:54.354 "data_offset": 0, 00:11:54.354 "data_size": 65536 00:11:54.354 }, 00:11:54.354 { 00:11:54.354 "name": "BaseBdev2", 00:11:54.354 "uuid": "76cba482-e5aa-43bb-9bd1-48e57d07a869", 00:11:54.354 "is_configured": true, 00:11:54.354 "data_offset": 0, 00:11:54.354 "data_size": 65536 00:11:54.354 }, 00:11:54.354 { 00:11:54.354 "name": "BaseBdev3", 00:11:54.354 "uuid": "622f8d90-8b20-4799-9ff6-488b07d73b1a", 00:11:54.354 "is_configured": true, 00:11:54.354 "data_offset": 0, 00:11:54.354 "data_size": 65536 00:11:54.354 }, 00:11:54.354 { 00:11:54.354 "name": "BaseBdev4", 00:11:54.354 "uuid": "abe2e335-e60d-4a6e-bf19-215de811b29f", 00:11:54.354 "is_configured": true, 00:11:54.354 "data_offset": 0, 00:11:54.354 "data_size": 65536 00:11:54.354 } 00:11:54.354 ] 00:11:54.354 } 00:11:54.354 } 00:11:54.354 }' 00:11:54.354 04:02:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:54.613 04:02:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:11:54.613 BaseBdev2 00:11:54.613 BaseBdev3 
00:11:54.613 BaseBdev4' 00:11:54.613 04:02:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:54.613 04:02:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:11:54.613 04:02:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:54.613 04:02:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:11:54.613 04:02:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:54.613 04:02:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:54.613 04:02:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:54.613 04:02:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:54.613 04:02:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:54.613 04:02:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:54.613 04:02:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:54.613 04:02:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:11:54.613 04:02:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:54.613 04:02:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:54.613 04:02:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:54.613 04:02:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:54.613 04:02:47 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:54.613 04:02:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:54.613 04:02:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:54.613 04:02:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:11:54.613 04:02:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:54.613 04:02:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:54.613 04:02:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:54.613 04:02:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:54.613 04:02:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:54.613 04:02:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:54.613 04:02:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:54.614 04:02:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:11:54.614 04:02:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:54.614 04:02:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:54.614 04:02:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:54.614 04:02:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:54.614 04:02:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:54.614 04:02:47 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:54.614 04:02:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:11:54.614 04:02:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:54.614 04:02:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:54.614 [2024-12-06 04:02:47.947968] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:11:54.614 [2024-12-06 04:02:47.948000] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:54.614 [2024-12-06 04:02:47.948073] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:54.872 04:02:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:54.872 04:02:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:11:54.872 04:02:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0 00:11:54.872 04:02:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:11:54.872 04:02:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1 00:11:54.872 04:02:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:11:54.872 04:02:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 3 00:11:54.872 04:02:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:54.872 04:02:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:11:54.872 04:02:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:54.872 04:02:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local 
strip_size=64 00:11:54.872 04:02:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:54.872 04:02:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:54.872 04:02:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:54.872 04:02:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:54.872 04:02:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:54.872 04:02:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:54.872 04:02:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:54.872 04:02:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:54.872 04:02:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:54.872 04:02:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:54.872 04:02:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:54.872 "name": "Existed_Raid", 00:11:54.872 "uuid": "4244e5cd-d0ee-4fd1-a981-3512ac538b0f", 00:11:54.872 "strip_size_kb": 64, 00:11:54.872 "state": "offline", 00:11:54.872 "raid_level": "raid0", 00:11:54.872 "superblock": false, 00:11:54.872 "num_base_bdevs": 4, 00:11:54.872 "num_base_bdevs_discovered": 3, 00:11:54.872 "num_base_bdevs_operational": 3, 00:11:54.872 "base_bdevs_list": [ 00:11:54.872 { 00:11:54.872 "name": null, 00:11:54.872 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:54.872 "is_configured": false, 00:11:54.872 "data_offset": 0, 00:11:54.872 "data_size": 65536 00:11:54.872 }, 00:11:54.872 { 00:11:54.872 "name": "BaseBdev2", 00:11:54.872 "uuid": "76cba482-e5aa-43bb-9bd1-48e57d07a869", 00:11:54.872 "is_configured": 
true, 00:11:54.872 "data_offset": 0, 00:11:54.872 "data_size": 65536 00:11:54.872 }, 00:11:54.872 { 00:11:54.872 "name": "BaseBdev3", 00:11:54.872 "uuid": "622f8d90-8b20-4799-9ff6-488b07d73b1a", 00:11:54.872 "is_configured": true, 00:11:54.872 "data_offset": 0, 00:11:54.872 "data_size": 65536 00:11:54.872 }, 00:11:54.872 { 00:11:54.872 "name": "BaseBdev4", 00:11:54.872 "uuid": "abe2e335-e60d-4a6e-bf19-215de811b29f", 00:11:54.872 "is_configured": true, 00:11:54.872 "data_offset": 0, 00:11:54.872 "data_size": 65536 00:11:54.872 } 00:11:54.872 ] 00:11:54.872 }' 00:11:54.873 04:02:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:54.873 04:02:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:55.131 04:02:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:11:55.131 04:02:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:55.131 04:02:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:55.131 04:02:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:55.131 04:02:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:55.131 04:02:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:11:55.131 04:02:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:55.390 04:02:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:11:55.390 04:02:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:11:55.390 04:02:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:11:55.390 04:02:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 
00:11:55.390 04:02:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:55.390 [2024-12-06 04:02:48.491275] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:11:55.390 04:02:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:55.390 04:02:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:11:55.390 04:02:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:55.390 04:02:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:55.390 04:02:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:11:55.390 04:02:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:55.390 04:02:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:55.390 04:02:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:55.390 04:02:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:11:55.390 04:02:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:11:55.390 04:02:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:11:55.390 04:02:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:55.390 04:02:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:55.390 [2024-12-06 04:02:48.649816] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:11:55.649 04:02:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:55.649 04:02:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:11:55.650 04:02:48 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:55.650 04:02:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:11:55.650 04:02:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:55.650 04:02:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:55.650 04:02:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:55.650 04:02:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:55.650 04:02:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:11:55.650 04:02:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:11:55.650 04:02:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:11:55.650 04:02:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:55.650 04:02:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:55.650 [2024-12-06 04:02:48.810713] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:11:55.650 [2024-12-06 04:02:48.810814] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:11:55.650 04:02:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:55.650 04:02:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:11:55.650 04:02:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:55.650 04:02:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:55.650 04:02:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r 
'.[0]["name"] | select(.)' 00:11:55.650 04:02:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:55.650 04:02:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:55.650 04:02:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:55.650 04:02:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:11:55.650 04:02:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:11:55.650 04:02:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:11:55.650 04:02:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:11:55.650 04:02:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:11:55.650 04:02:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:11:55.650 04:02:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:55.650 04:02:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:55.909 BaseBdev2 00:11:55.909 04:02:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:55.909 04:02:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:11:55.909 04:02:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:11:55.909 04:02:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:55.909 04:02:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:11:55.909 04:02:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:55.909 04:02:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # 
bdev_timeout=2000 00:11:55.909 04:02:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:55.909 04:02:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:55.909 04:02:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:55.909 04:02:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:55.909 04:02:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:11:55.909 04:02:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:55.909 04:02:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:55.909 [ 00:11:55.909 { 00:11:55.909 "name": "BaseBdev2", 00:11:55.909 "aliases": [ 00:11:55.909 "5e31c4f6-af0b-4d48-b18d-1bd2d46e9038" 00:11:55.909 ], 00:11:55.909 "product_name": "Malloc disk", 00:11:55.909 "block_size": 512, 00:11:55.909 "num_blocks": 65536, 00:11:55.909 "uuid": "5e31c4f6-af0b-4d48-b18d-1bd2d46e9038", 00:11:55.909 "assigned_rate_limits": { 00:11:55.909 "rw_ios_per_sec": 0, 00:11:55.909 "rw_mbytes_per_sec": 0, 00:11:55.909 "r_mbytes_per_sec": 0, 00:11:55.909 "w_mbytes_per_sec": 0 00:11:55.909 }, 00:11:55.909 "claimed": false, 00:11:55.909 "zoned": false, 00:11:55.909 "supported_io_types": { 00:11:55.909 "read": true, 00:11:55.909 "write": true, 00:11:55.909 "unmap": true, 00:11:55.909 "flush": true, 00:11:55.909 "reset": true, 00:11:55.909 "nvme_admin": false, 00:11:55.909 "nvme_io": false, 00:11:55.909 "nvme_io_md": false, 00:11:55.909 "write_zeroes": true, 00:11:55.909 "zcopy": true, 00:11:55.909 "get_zone_info": false, 00:11:55.909 "zone_management": false, 00:11:55.909 "zone_append": false, 00:11:55.909 "compare": false, 00:11:55.909 "compare_and_write": false, 00:11:55.909 "abort": true, 00:11:55.909 "seek_hole": false, 00:11:55.909 
"seek_data": false, 00:11:55.909 "copy": true, 00:11:55.909 "nvme_iov_md": false 00:11:55.910 }, 00:11:55.910 "memory_domains": [ 00:11:55.910 { 00:11:55.910 "dma_device_id": "system", 00:11:55.910 "dma_device_type": 1 00:11:55.910 }, 00:11:55.910 { 00:11:55.910 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:55.910 "dma_device_type": 2 00:11:55.910 } 00:11:55.910 ], 00:11:55.910 "driver_specific": {} 00:11:55.910 } 00:11:55.910 ] 00:11:55.910 04:02:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:55.910 04:02:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:11:55.910 04:02:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:11:55.910 04:02:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:11:55.910 04:02:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:11:55.910 04:02:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:55.910 04:02:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:55.910 BaseBdev3 00:11:55.910 04:02:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:55.910 04:02:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:11:55.910 04:02:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:11:55.910 04:02:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:55.910 04:02:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:11:55.910 04:02:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:55.910 04:02:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 
00:11:55.910 04:02:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:55.910 04:02:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:55.910 04:02:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:55.910 04:02:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:55.910 04:02:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:11:55.910 04:02:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:55.910 04:02:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:55.910 [ 00:11:55.910 { 00:11:55.910 "name": "BaseBdev3", 00:11:55.910 "aliases": [ 00:11:55.910 "7ee6b33b-8c4a-4222-a459-751231bdfbc2" 00:11:55.910 ], 00:11:55.910 "product_name": "Malloc disk", 00:11:55.910 "block_size": 512, 00:11:55.910 "num_blocks": 65536, 00:11:55.910 "uuid": "7ee6b33b-8c4a-4222-a459-751231bdfbc2", 00:11:55.910 "assigned_rate_limits": { 00:11:55.910 "rw_ios_per_sec": 0, 00:11:55.910 "rw_mbytes_per_sec": 0, 00:11:55.910 "r_mbytes_per_sec": 0, 00:11:55.910 "w_mbytes_per_sec": 0 00:11:55.910 }, 00:11:55.910 "claimed": false, 00:11:55.910 "zoned": false, 00:11:55.910 "supported_io_types": { 00:11:55.910 "read": true, 00:11:55.910 "write": true, 00:11:55.910 "unmap": true, 00:11:55.910 "flush": true, 00:11:55.910 "reset": true, 00:11:55.910 "nvme_admin": false, 00:11:55.910 "nvme_io": false, 00:11:55.910 "nvme_io_md": false, 00:11:55.910 "write_zeroes": true, 00:11:55.910 "zcopy": true, 00:11:55.910 "get_zone_info": false, 00:11:55.910 "zone_management": false, 00:11:55.910 "zone_append": false, 00:11:55.910 "compare": false, 00:11:55.910 "compare_and_write": false, 00:11:55.910 "abort": true, 00:11:55.910 "seek_hole": false, 00:11:55.910 "seek_data": false, 
00:11:55.910 "copy": true, 00:11:55.910 "nvme_iov_md": false 00:11:55.910 }, 00:11:55.910 "memory_domains": [ 00:11:55.910 { 00:11:55.910 "dma_device_id": "system", 00:11:55.910 "dma_device_type": 1 00:11:55.910 }, 00:11:55.910 { 00:11:55.910 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:55.910 "dma_device_type": 2 00:11:55.910 } 00:11:55.910 ], 00:11:55.910 "driver_specific": {} 00:11:55.910 } 00:11:55.910 ] 00:11:55.910 04:02:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:55.910 04:02:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:11:55.910 04:02:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:11:55.910 04:02:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:11:55.910 04:02:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:11:55.910 04:02:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:55.910 04:02:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:55.910 BaseBdev4 00:11:55.910 04:02:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:55.910 04:02:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:11:55.910 04:02:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:11:55.910 04:02:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:55.910 04:02:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:11:55.910 04:02:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:55.910 04:02:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:55.910 
04:02:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:55.910 04:02:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:55.910 04:02:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:55.910 04:02:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:55.910 04:02:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:11:55.910 04:02:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:55.910 04:02:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:55.910 [ 00:11:55.910 { 00:11:55.910 "name": "BaseBdev4", 00:11:55.910 "aliases": [ 00:11:55.910 "fc905d50-8121-4fd5-81fa-5193fce36143" 00:11:55.910 ], 00:11:55.910 "product_name": "Malloc disk", 00:11:55.910 "block_size": 512, 00:11:55.910 "num_blocks": 65536, 00:11:55.910 "uuid": "fc905d50-8121-4fd5-81fa-5193fce36143", 00:11:55.910 "assigned_rate_limits": { 00:11:55.910 "rw_ios_per_sec": 0, 00:11:55.910 "rw_mbytes_per_sec": 0, 00:11:55.910 "r_mbytes_per_sec": 0, 00:11:55.910 "w_mbytes_per_sec": 0 00:11:55.910 }, 00:11:55.910 "claimed": false, 00:11:55.910 "zoned": false, 00:11:55.910 "supported_io_types": { 00:11:55.910 "read": true, 00:11:55.910 "write": true, 00:11:55.910 "unmap": true, 00:11:55.910 "flush": true, 00:11:55.910 "reset": true, 00:11:55.910 "nvme_admin": false, 00:11:55.910 "nvme_io": false, 00:11:55.910 "nvme_io_md": false, 00:11:55.910 "write_zeroes": true, 00:11:55.910 "zcopy": true, 00:11:55.910 "get_zone_info": false, 00:11:55.910 "zone_management": false, 00:11:55.910 "zone_append": false, 00:11:55.910 "compare": false, 00:11:55.910 "compare_and_write": false, 00:11:55.910 "abort": true, 00:11:55.910 "seek_hole": false, 00:11:55.910 "seek_data": false, 00:11:55.910 
"copy": true, 00:11:55.910 "nvme_iov_md": false 00:11:55.910 }, 00:11:55.910 "memory_domains": [ 00:11:55.910 { 00:11:55.910 "dma_device_id": "system", 00:11:55.910 "dma_device_type": 1 00:11:55.910 }, 00:11:55.910 { 00:11:55.910 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:55.910 "dma_device_type": 2 00:11:55.910 } 00:11:55.910 ], 00:11:55.910 "driver_specific": {} 00:11:55.910 } 00:11:55.910 ] 00:11:55.910 04:02:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:55.910 04:02:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:11:55.910 04:02:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:11:55.910 04:02:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:11:55.910 04:02:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:11:55.910 04:02:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:55.910 04:02:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:55.910 [2024-12-06 04:02:49.185640] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:55.910 [2024-12-06 04:02:49.185690] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:55.910 [2024-12-06 04:02:49.185729] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:55.910 [2024-12-06 04:02:49.187827] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:55.910 [2024-12-06 04:02:49.187884] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:11:55.910 04:02:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:55.910 04:02:49 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:11:55.910 04:02:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:55.910 04:02:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:55.910 04:02:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:55.910 04:02:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:55.910 04:02:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:55.910 04:02:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:55.910 04:02:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:55.910 04:02:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:55.911 04:02:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:55.911 04:02:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:55.911 04:02:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:55.911 04:02:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:55.911 04:02:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:55.911 04:02:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:55.911 04:02:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:55.911 "name": "Existed_Raid", 00:11:55.911 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:55.911 "strip_size_kb": 64, 00:11:55.911 "state": "configuring", 00:11:55.911 
"raid_level": "raid0", 00:11:55.911 "superblock": false, 00:11:55.911 "num_base_bdevs": 4, 00:11:55.911 "num_base_bdevs_discovered": 3, 00:11:55.911 "num_base_bdevs_operational": 4, 00:11:55.911 "base_bdevs_list": [ 00:11:55.911 { 00:11:55.911 "name": "BaseBdev1", 00:11:55.911 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:55.911 "is_configured": false, 00:11:55.911 "data_offset": 0, 00:11:55.911 "data_size": 0 00:11:55.911 }, 00:11:55.911 { 00:11:55.911 "name": "BaseBdev2", 00:11:55.911 "uuid": "5e31c4f6-af0b-4d48-b18d-1bd2d46e9038", 00:11:55.911 "is_configured": true, 00:11:55.911 "data_offset": 0, 00:11:55.911 "data_size": 65536 00:11:55.911 }, 00:11:55.911 { 00:11:55.911 "name": "BaseBdev3", 00:11:55.911 "uuid": "7ee6b33b-8c4a-4222-a459-751231bdfbc2", 00:11:55.911 "is_configured": true, 00:11:55.911 "data_offset": 0, 00:11:55.911 "data_size": 65536 00:11:55.911 }, 00:11:55.911 { 00:11:55.911 "name": "BaseBdev4", 00:11:55.911 "uuid": "fc905d50-8121-4fd5-81fa-5193fce36143", 00:11:55.911 "is_configured": true, 00:11:55.911 "data_offset": 0, 00:11:55.911 "data_size": 65536 00:11:55.911 } 00:11:55.911 ] 00:11:55.911 }' 00:11:55.911 04:02:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:55.911 04:02:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:56.479 04:02:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:11:56.479 04:02:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:56.479 04:02:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:56.479 [2024-12-06 04:02:49.636899] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:11:56.479 04:02:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:56.479 04:02:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # 
verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:11:56.479 04:02:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:56.479 04:02:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:56.479 04:02:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:56.479 04:02:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:56.479 04:02:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:56.479 04:02:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:56.479 04:02:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:56.479 04:02:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:56.479 04:02:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:56.479 04:02:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:56.479 04:02:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:56.479 04:02:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:56.479 04:02:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:56.479 04:02:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:56.479 04:02:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:56.479 "name": "Existed_Raid", 00:11:56.479 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:56.479 "strip_size_kb": 64, 00:11:56.479 "state": "configuring", 00:11:56.479 "raid_level": "raid0", 00:11:56.479 "superblock": false, 00:11:56.479 
"num_base_bdevs": 4, 00:11:56.479 "num_base_bdevs_discovered": 2, 00:11:56.479 "num_base_bdevs_operational": 4, 00:11:56.479 "base_bdevs_list": [ 00:11:56.479 { 00:11:56.479 "name": "BaseBdev1", 00:11:56.479 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:56.479 "is_configured": false, 00:11:56.479 "data_offset": 0, 00:11:56.479 "data_size": 0 00:11:56.479 }, 00:11:56.479 { 00:11:56.479 "name": null, 00:11:56.479 "uuid": "5e31c4f6-af0b-4d48-b18d-1bd2d46e9038", 00:11:56.479 "is_configured": false, 00:11:56.479 "data_offset": 0, 00:11:56.479 "data_size": 65536 00:11:56.479 }, 00:11:56.479 { 00:11:56.479 "name": "BaseBdev3", 00:11:56.479 "uuid": "7ee6b33b-8c4a-4222-a459-751231bdfbc2", 00:11:56.479 "is_configured": true, 00:11:56.479 "data_offset": 0, 00:11:56.479 "data_size": 65536 00:11:56.479 }, 00:11:56.479 { 00:11:56.479 "name": "BaseBdev4", 00:11:56.479 "uuid": "fc905d50-8121-4fd5-81fa-5193fce36143", 00:11:56.479 "is_configured": true, 00:11:56.479 "data_offset": 0, 00:11:56.479 "data_size": 65536 00:11:56.479 } 00:11:56.479 ] 00:11:56.479 }' 00:11:56.479 04:02:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:56.479 04:02:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:56.739 04:02:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:56.739 04:02:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:56.739 04:02:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:56.739 04:02:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:11:57.004 04:02:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:57.004 04:02:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:11:57.004 04:02:50 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:11:57.004 04:02:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:57.004 04:02:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:57.004 [2024-12-06 04:02:50.171713] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:57.004 BaseBdev1 00:11:57.004 04:02:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:57.004 04:02:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:11:57.004 04:02:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:11:57.004 04:02:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:57.004 04:02:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:11:57.004 04:02:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:57.004 04:02:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:57.004 04:02:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:57.004 04:02:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:57.004 04:02:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:57.004 04:02:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:57.004 04:02:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:11:57.004 04:02:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:57.004 04:02:50 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:11:57.004 [ 00:11:57.004 { 00:11:57.004 "name": "BaseBdev1", 00:11:57.004 "aliases": [ 00:11:57.004 "b7751820-3692-4286-afdc-a580e17c4aea" 00:11:57.004 ], 00:11:57.004 "product_name": "Malloc disk", 00:11:57.004 "block_size": 512, 00:11:57.004 "num_blocks": 65536, 00:11:57.004 "uuid": "b7751820-3692-4286-afdc-a580e17c4aea", 00:11:57.004 "assigned_rate_limits": { 00:11:57.004 "rw_ios_per_sec": 0, 00:11:57.004 "rw_mbytes_per_sec": 0, 00:11:57.004 "r_mbytes_per_sec": 0, 00:11:57.004 "w_mbytes_per_sec": 0 00:11:57.004 }, 00:11:57.004 "claimed": true, 00:11:57.004 "claim_type": "exclusive_write", 00:11:57.004 "zoned": false, 00:11:57.004 "supported_io_types": { 00:11:57.004 "read": true, 00:11:57.004 "write": true, 00:11:57.004 "unmap": true, 00:11:57.004 "flush": true, 00:11:57.004 "reset": true, 00:11:57.004 "nvme_admin": false, 00:11:57.004 "nvme_io": false, 00:11:57.004 "nvme_io_md": false, 00:11:57.004 "write_zeroes": true, 00:11:57.004 "zcopy": true, 00:11:57.004 "get_zone_info": false, 00:11:57.004 "zone_management": false, 00:11:57.004 "zone_append": false, 00:11:57.004 "compare": false, 00:11:57.004 "compare_and_write": false, 00:11:57.004 "abort": true, 00:11:57.004 "seek_hole": false, 00:11:57.004 "seek_data": false, 00:11:57.004 "copy": true, 00:11:57.004 "nvme_iov_md": false 00:11:57.004 }, 00:11:57.004 "memory_domains": [ 00:11:57.004 { 00:11:57.004 "dma_device_id": "system", 00:11:57.004 "dma_device_type": 1 00:11:57.004 }, 00:11:57.004 { 00:11:57.004 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:57.004 "dma_device_type": 2 00:11:57.004 } 00:11:57.004 ], 00:11:57.004 "driver_specific": {} 00:11:57.004 } 00:11:57.004 ] 00:11:57.004 04:02:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:57.004 04:02:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:11:57.004 04:02:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- 
# verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:11:57.004 04:02:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:57.004 04:02:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:57.004 04:02:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:57.004 04:02:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:57.004 04:02:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:57.004 04:02:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:57.004 04:02:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:57.004 04:02:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:57.004 04:02:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:57.004 04:02:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:57.004 04:02:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:57.004 04:02:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:57.004 04:02:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:57.004 04:02:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:57.004 04:02:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:57.004 "name": "Existed_Raid", 00:11:57.004 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:57.004 "strip_size_kb": 64, 00:11:57.004 "state": "configuring", 00:11:57.004 "raid_level": "raid0", 00:11:57.004 "superblock": false, 
00:11:57.004 "num_base_bdevs": 4, 00:11:57.004 "num_base_bdevs_discovered": 3, 00:11:57.004 "num_base_bdevs_operational": 4, 00:11:57.004 "base_bdevs_list": [ 00:11:57.004 { 00:11:57.004 "name": "BaseBdev1", 00:11:57.004 "uuid": "b7751820-3692-4286-afdc-a580e17c4aea", 00:11:57.004 "is_configured": true, 00:11:57.004 "data_offset": 0, 00:11:57.004 "data_size": 65536 00:11:57.004 }, 00:11:57.004 { 00:11:57.004 "name": null, 00:11:57.004 "uuid": "5e31c4f6-af0b-4d48-b18d-1bd2d46e9038", 00:11:57.004 "is_configured": false, 00:11:57.004 "data_offset": 0, 00:11:57.004 "data_size": 65536 00:11:57.004 }, 00:11:57.004 { 00:11:57.004 "name": "BaseBdev3", 00:11:57.004 "uuid": "7ee6b33b-8c4a-4222-a459-751231bdfbc2", 00:11:57.004 "is_configured": true, 00:11:57.004 "data_offset": 0, 00:11:57.004 "data_size": 65536 00:11:57.004 }, 00:11:57.004 { 00:11:57.004 "name": "BaseBdev4", 00:11:57.004 "uuid": "fc905d50-8121-4fd5-81fa-5193fce36143", 00:11:57.004 "is_configured": true, 00:11:57.004 "data_offset": 0, 00:11:57.004 "data_size": 65536 00:11:57.004 } 00:11:57.004 ] 00:11:57.004 }' 00:11:57.004 04:02:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:57.004 04:02:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:57.570 04:02:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:11:57.571 04:02:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:57.571 04:02:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:57.571 04:02:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:57.571 04:02:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:57.571 04:02:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:11:57.571 04:02:50 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:11:57.571 04:02:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:57.571 04:02:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:57.571 [2024-12-06 04:02:50.702882] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:11:57.571 04:02:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:57.571 04:02:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:11:57.571 04:02:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:57.571 04:02:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:57.571 04:02:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:57.571 04:02:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:57.571 04:02:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:57.571 04:02:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:57.571 04:02:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:57.571 04:02:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:57.571 04:02:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:57.571 04:02:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:57.571 04:02:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:57.571 04:02:50 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:11:57.571 04:02:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:57.571 04:02:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:57.571 04:02:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:57.571 "name": "Existed_Raid", 00:11:57.571 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:57.571 "strip_size_kb": 64, 00:11:57.571 "state": "configuring", 00:11:57.571 "raid_level": "raid0", 00:11:57.571 "superblock": false, 00:11:57.571 "num_base_bdevs": 4, 00:11:57.571 "num_base_bdevs_discovered": 2, 00:11:57.571 "num_base_bdevs_operational": 4, 00:11:57.571 "base_bdevs_list": [ 00:11:57.571 { 00:11:57.571 "name": "BaseBdev1", 00:11:57.571 "uuid": "b7751820-3692-4286-afdc-a580e17c4aea", 00:11:57.571 "is_configured": true, 00:11:57.571 "data_offset": 0, 00:11:57.571 "data_size": 65536 00:11:57.571 }, 00:11:57.571 { 00:11:57.571 "name": null, 00:11:57.571 "uuid": "5e31c4f6-af0b-4d48-b18d-1bd2d46e9038", 00:11:57.571 "is_configured": false, 00:11:57.571 "data_offset": 0, 00:11:57.571 "data_size": 65536 00:11:57.571 }, 00:11:57.571 { 00:11:57.571 "name": null, 00:11:57.571 "uuid": "7ee6b33b-8c4a-4222-a459-751231bdfbc2", 00:11:57.571 "is_configured": false, 00:11:57.571 "data_offset": 0, 00:11:57.571 "data_size": 65536 00:11:57.571 }, 00:11:57.571 { 00:11:57.571 "name": "BaseBdev4", 00:11:57.571 "uuid": "fc905d50-8121-4fd5-81fa-5193fce36143", 00:11:57.571 "is_configured": true, 00:11:57.571 "data_offset": 0, 00:11:57.571 "data_size": 65536 00:11:57.571 } 00:11:57.571 ] 00:11:57.571 }' 00:11:57.571 04:02:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:57.571 04:02:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:58.136 04:02:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # 
rpc_cmd bdev_raid_get_bdevs all 00:11:58.136 04:02:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:58.136 04:02:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:58.136 04:02:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:11:58.136 04:02:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:58.136 04:02:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:11:58.136 04:02:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:11:58.136 04:02:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:58.136 04:02:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:58.136 [2024-12-06 04:02:51.242019] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:58.136 04:02:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:58.136 04:02:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:11:58.136 04:02:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:58.136 04:02:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:58.136 04:02:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:58.136 04:02:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:58.136 04:02:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:58.136 04:02:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:11:58.136 04:02:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:58.136 04:02:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:58.136 04:02:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:58.136 04:02:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:58.136 04:02:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:58.136 04:02:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:58.136 04:02:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:58.136 04:02:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:58.136 04:02:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:58.136 "name": "Existed_Raid", 00:11:58.136 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:58.136 "strip_size_kb": 64, 00:11:58.136 "state": "configuring", 00:11:58.136 "raid_level": "raid0", 00:11:58.136 "superblock": false, 00:11:58.136 "num_base_bdevs": 4, 00:11:58.136 "num_base_bdevs_discovered": 3, 00:11:58.136 "num_base_bdevs_operational": 4, 00:11:58.136 "base_bdevs_list": [ 00:11:58.136 { 00:11:58.136 "name": "BaseBdev1", 00:11:58.136 "uuid": "b7751820-3692-4286-afdc-a580e17c4aea", 00:11:58.136 "is_configured": true, 00:11:58.136 "data_offset": 0, 00:11:58.136 "data_size": 65536 00:11:58.136 }, 00:11:58.136 { 00:11:58.136 "name": null, 00:11:58.136 "uuid": "5e31c4f6-af0b-4d48-b18d-1bd2d46e9038", 00:11:58.136 "is_configured": false, 00:11:58.136 "data_offset": 0, 00:11:58.136 "data_size": 65536 00:11:58.136 }, 00:11:58.136 { 00:11:58.136 "name": "BaseBdev3", 00:11:58.136 "uuid": "7ee6b33b-8c4a-4222-a459-751231bdfbc2", 00:11:58.136 "is_configured": 
true, 00:11:58.136 "data_offset": 0, 00:11:58.136 "data_size": 65536 00:11:58.136 }, 00:11:58.136 { 00:11:58.136 "name": "BaseBdev4", 00:11:58.136 "uuid": "fc905d50-8121-4fd5-81fa-5193fce36143", 00:11:58.136 "is_configured": true, 00:11:58.136 "data_offset": 0, 00:11:58.136 "data_size": 65536 00:11:58.136 } 00:11:58.136 ] 00:11:58.136 }' 00:11:58.136 04:02:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:58.136 04:02:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:58.394 04:02:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:58.394 04:02:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:11:58.394 04:02:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:58.394 04:02:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:58.394 04:02:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:58.394 04:02:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:11:58.394 04:02:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:11:58.394 04:02:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:58.394 04:02:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:58.394 [2024-12-06 04:02:51.737219] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:11:58.653 04:02:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:58.653 04:02:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:11:58.653 04:02:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 
-- # local raid_bdev_name=Existed_Raid 00:11:58.653 04:02:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:58.653 04:02:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:58.653 04:02:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:58.653 04:02:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:58.653 04:02:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:58.653 04:02:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:58.653 04:02:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:58.653 04:02:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:58.653 04:02:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:58.653 04:02:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:58.653 04:02:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:58.653 04:02:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:58.653 04:02:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:58.653 04:02:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:58.653 "name": "Existed_Raid", 00:11:58.653 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:58.653 "strip_size_kb": 64, 00:11:58.653 "state": "configuring", 00:11:58.653 "raid_level": "raid0", 00:11:58.653 "superblock": false, 00:11:58.653 "num_base_bdevs": 4, 00:11:58.653 "num_base_bdevs_discovered": 2, 00:11:58.653 "num_base_bdevs_operational": 4, 00:11:58.653 
"base_bdevs_list": [ 00:11:58.653 { 00:11:58.653 "name": null, 00:11:58.653 "uuid": "b7751820-3692-4286-afdc-a580e17c4aea", 00:11:58.653 "is_configured": false, 00:11:58.653 "data_offset": 0, 00:11:58.653 "data_size": 65536 00:11:58.653 }, 00:11:58.653 { 00:11:58.653 "name": null, 00:11:58.653 "uuid": "5e31c4f6-af0b-4d48-b18d-1bd2d46e9038", 00:11:58.653 "is_configured": false, 00:11:58.653 "data_offset": 0, 00:11:58.653 "data_size": 65536 00:11:58.653 }, 00:11:58.653 { 00:11:58.653 "name": "BaseBdev3", 00:11:58.653 "uuid": "7ee6b33b-8c4a-4222-a459-751231bdfbc2", 00:11:58.653 "is_configured": true, 00:11:58.653 "data_offset": 0, 00:11:58.653 "data_size": 65536 00:11:58.653 }, 00:11:58.653 { 00:11:58.653 "name": "BaseBdev4", 00:11:58.653 "uuid": "fc905d50-8121-4fd5-81fa-5193fce36143", 00:11:58.653 "is_configured": true, 00:11:58.653 "data_offset": 0, 00:11:58.653 "data_size": 65536 00:11:58.653 } 00:11:58.653 ] 00:11:58.653 }' 00:11:58.653 04:02:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:58.653 04:02:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:59.225 04:02:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:59.225 04:02:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:11:59.225 04:02:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:59.225 04:02:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:59.225 04:02:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:59.225 04:02:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:11:59.225 04:02:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:11:59.225 04:02:52 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:59.225 04:02:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:59.225 [2024-12-06 04:02:52.338100] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:59.225 04:02:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:59.225 04:02:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:11:59.225 04:02:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:59.225 04:02:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:59.225 04:02:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:59.225 04:02:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:59.225 04:02:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:59.225 04:02:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:59.225 04:02:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:59.225 04:02:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:59.225 04:02:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:59.225 04:02:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:59.225 04:02:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:59.225 04:02:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:59.225 04:02:52 bdev_raid.raid_state_function_test 
-- common/autotest_common.sh@10 -- # set +x 00:11:59.225 04:02:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:59.225 04:02:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:59.225 "name": "Existed_Raid", 00:11:59.225 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:59.225 "strip_size_kb": 64, 00:11:59.225 "state": "configuring", 00:11:59.225 "raid_level": "raid0", 00:11:59.225 "superblock": false, 00:11:59.225 "num_base_bdevs": 4, 00:11:59.225 "num_base_bdevs_discovered": 3, 00:11:59.225 "num_base_bdevs_operational": 4, 00:11:59.225 "base_bdevs_list": [ 00:11:59.225 { 00:11:59.225 "name": null, 00:11:59.225 "uuid": "b7751820-3692-4286-afdc-a580e17c4aea", 00:11:59.225 "is_configured": false, 00:11:59.225 "data_offset": 0, 00:11:59.225 "data_size": 65536 00:11:59.225 }, 00:11:59.225 { 00:11:59.225 "name": "BaseBdev2", 00:11:59.225 "uuid": "5e31c4f6-af0b-4d48-b18d-1bd2d46e9038", 00:11:59.225 "is_configured": true, 00:11:59.225 "data_offset": 0, 00:11:59.225 "data_size": 65536 00:11:59.225 }, 00:11:59.225 { 00:11:59.225 "name": "BaseBdev3", 00:11:59.225 "uuid": "7ee6b33b-8c4a-4222-a459-751231bdfbc2", 00:11:59.225 "is_configured": true, 00:11:59.225 "data_offset": 0, 00:11:59.225 "data_size": 65536 00:11:59.225 }, 00:11:59.225 { 00:11:59.225 "name": "BaseBdev4", 00:11:59.225 "uuid": "fc905d50-8121-4fd5-81fa-5193fce36143", 00:11:59.225 "is_configured": true, 00:11:59.225 "data_offset": 0, 00:11:59.225 "data_size": 65536 00:11:59.225 } 00:11:59.225 ] 00:11:59.225 }' 00:11:59.225 04:02:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:59.225 04:02:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:59.489 04:02:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:59.489 04:02:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:11:59.489 04:02:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:59.489 04:02:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:11:59.489 04:02:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:59.489 04:02:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:11:59.489 04:02:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:11:59.489 04:02:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:59.489 04:02:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:59.489 04:02:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:59.489 04:02:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:59.489 04:02:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u b7751820-3692-4286-afdc-a580e17c4aea 00:11:59.489 04:02:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:59.489 04:02:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:59.489 [2024-12-06 04:02:52.832629] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:11:59.489 [2024-12-06 04:02:52.832753] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:11:59.489 [2024-12-06 04:02:52.832783] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 262144, blocklen 512 00:11:59.489 [2024-12-06 04:02:52.833150] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:11:59.489 [2024-12-06 04:02:52.833378] 
bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:11:59.489 [2024-12-06 04:02:52.833436] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:11:59.489 [2024-12-06 04:02:52.833757] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:59.489 NewBaseBdev 00:11:59.489 04:02:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:59.489 04:02:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:11:59.489 04:02:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:11:59.489 04:02:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:59.489 04:02:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:11:59.489 04:02:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:59.489 04:02:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:59.489 04:02:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:59.489 04:02:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:59.489 04:02:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:59.747 04:02:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:59.747 04:02:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:11:59.747 04:02:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:59.747 04:02:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:59.747 [ 00:11:59.747 { 
00:11:59.747 "name": "NewBaseBdev", 00:11:59.747 "aliases": [ 00:11:59.747 "b7751820-3692-4286-afdc-a580e17c4aea" 00:11:59.747 ], 00:11:59.747 "product_name": "Malloc disk", 00:11:59.747 "block_size": 512, 00:11:59.747 "num_blocks": 65536, 00:11:59.747 "uuid": "b7751820-3692-4286-afdc-a580e17c4aea", 00:11:59.747 "assigned_rate_limits": { 00:11:59.747 "rw_ios_per_sec": 0, 00:11:59.747 "rw_mbytes_per_sec": 0, 00:11:59.747 "r_mbytes_per_sec": 0, 00:11:59.747 "w_mbytes_per_sec": 0 00:11:59.747 }, 00:11:59.747 "claimed": true, 00:11:59.747 "claim_type": "exclusive_write", 00:11:59.747 "zoned": false, 00:11:59.747 "supported_io_types": { 00:11:59.747 "read": true, 00:11:59.747 "write": true, 00:11:59.747 "unmap": true, 00:11:59.747 "flush": true, 00:11:59.747 "reset": true, 00:11:59.747 "nvme_admin": false, 00:11:59.747 "nvme_io": false, 00:11:59.747 "nvme_io_md": false, 00:11:59.747 "write_zeroes": true, 00:11:59.747 "zcopy": true, 00:11:59.747 "get_zone_info": false, 00:11:59.747 "zone_management": false, 00:11:59.747 "zone_append": false, 00:11:59.747 "compare": false, 00:11:59.748 "compare_and_write": false, 00:11:59.748 "abort": true, 00:11:59.748 "seek_hole": false, 00:11:59.748 "seek_data": false, 00:11:59.748 "copy": true, 00:11:59.748 "nvme_iov_md": false 00:11:59.748 }, 00:11:59.748 "memory_domains": [ 00:11:59.748 { 00:11:59.748 "dma_device_id": "system", 00:11:59.748 "dma_device_type": 1 00:11:59.748 }, 00:11:59.748 { 00:11:59.748 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:59.748 "dma_device_type": 2 00:11:59.748 } 00:11:59.748 ], 00:11:59.748 "driver_specific": {} 00:11:59.748 } 00:11:59.748 ] 00:11:59.748 04:02:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:59.748 04:02:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:11:59.748 04:02:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid0 64 4 00:11:59.748 
04:02:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:59.748 04:02:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:59.748 04:02:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:59.748 04:02:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:59.748 04:02:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:59.748 04:02:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:59.748 04:02:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:59.748 04:02:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:59.748 04:02:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:59.748 04:02:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:59.748 04:02:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:59.748 04:02:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:59.748 04:02:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:59.748 04:02:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:59.748 04:02:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:59.748 "name": "Existed_Raid", 00:11:59.748 "uuid": "ef999437-d5b2-4bd4-bd5b-d35963626701", 00:11:59.748 "strip_size_kb": 64, 00:11:59.748 "state": "online", 00:11:59.748 "raid_level": "raid0", 00:11:59.748 "superblock": false, 00:11:59.748 "num_base_bdevs": 4, 00:11:59.748 "num_base_bdevs_discovered": 4, 00:11:59.748 
"num_base_bdevs_operational": 4, 00:11:59.748 "base_bdevs_list": [ 00:11:59.748 { 00:11:59.748 "name": "NewBaseBdev", 00:11:59.748 "uuid": "b7751820-3692-4286-afdc-a580e17c4aea", 00:11:59.748 "is_configured": true, 00:11:59.748 "data_offset": 0, 00:11:59.748 "data_size": 65536 00:11:59.748 }, 00:11:59.748 { 00:11:59.748 "name": "BaseBdev2", 00:11:59.748 "uuid": "5e31c4f6-af0b-4d48-b18d-1bd2d46e9038", 00:11:59.748 "is_configured": true, 00:11:59.748 "data_offset": 0, 00:11:59.748 "data_size": 65536 00:11:59.748 }, 00:11:59.748 { 00:11:59.748 "name": "BaseBdev3", 00:11:59.748 "uuid": "7ee6b33b-8c4a-4222-a459-751231bdfbc2", 00:11:59.748 "is_configured": true, 00:11:59.748 "data_offset": 0, 00:11:59.748 "data_size": 65536 00:11:59.748 }, 00:11:59.748 { 00:11:59.748 "name": "BaseBdev4", 00:11:59.748 "uuid": "fc905d50-8121-4fd5-81fa-5193fce36143", 00:11:59.748 "is_configured": true, 00:11:59.748 "data_offset": 0, 00:11:59.748 "data_size": 65536 00:11:59.748 } 00:11:59.748 ] 00:11:59.748 }' 00:11:59.748 04:02:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:59.748 04:02:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:00.006 04:02:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:12:00.006 04:02:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:12:00.006 04:02:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:12:00.006 04:02:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:12:00.006 04:02:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:12:00.006 04:02:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:12:00.006 04:02:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b 
Existed_Raid 00:12:00.006 04:02:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:12:00.006 04:02:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:00.006 04:02:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:00.006 [2024-12-06 04:02:53.352277] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:00.264 04:02:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:00.264 04:02:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:12:00.264 "name": "Existed_Raid", 00:12:00.264 "aliases": [ 00:12:00.264 "ef999437-d5b2-4bd4-bd5b-d35963626701" 00:12:00.264 ], 00:12:00.264 "product_name": "Raid Volume", 00:12:00.264 "block_size": 512, 00:12:00.264 "num_blocks": 262144, 00:12:00.264 "uuid": "ef999437-d5b2-4bd4-bd5b-d35963626701", 00:12:00.264 "assigned_rate_limits": { 00:12:00.264 "rw_ios_per_sec": 0, 00:12:00.265 "rw_mbytes_per_sec": 0, 00:12:00.265 "r_mbytes_per_sec": 0, 00:12:00.265 "w_mbytes_per_sec": 0 00:12:00.265 }, 00:12:00.265 "claimed": false, 00:12:00.265 "zoned": false, 00:12:00.265 "supported_io_types": { 00:12:00.265 "read": true, 00:12:00.265 "write": true, 00:12:00.265 "unmap": true, 00:12:00.265 "flush": true, 00:12:00.265 "reset": true, 00:12:00.265 "nvme_admin": false, 00:12:00.265 "nvme_io": false, 00:12:00.265 "nvme_io_md": false, 00:12:00.265 "write_zeroes": true, 00:12:00.265 "zcopy": false, 00:12:00.265 "get_zone_info": false, 00:12:00.265 "zone_management": false, 00:12:00.265 "zone_append": false, 00:12:00.265 "compare": false, 00:12:00.265 "compare_and_write": false, 00:12:00.265 "abort": false, 00:12:00.265 "seek_hole": false, 00:12:00.265 "seek_data": false, 00:12:00.265 "copy": false, 00:12:00.265 "nvme_iov_md": false 00:12:00.265 }, 00:12:00.265 "memory_domains": [ 00:12:00.265 { 00:12:00.265 "dma_device_id": "system", 
00:12:00.265 "dma_device_type": 1 00:12:00.265 }, 00:12:00.265 { 00:12:00.265 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:00.265 "dma_device_type": 2 00:12:00.265 }, 00:12:00.265 { 00:12:00.265 "dma_device_id": "system", 00:12:00.265 "dma_device_type": 1 00:12:00.265 }, 00:12:00.265 { 00:12:00.265 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:00.265 "dma_device_type": 2 00:12:00.265 }, 00:12:00.265 { 00:12:00.265 "dma_device_id": "system", 00:12:00.265 "dma_device_type": 1 00:12:00.265 }, 00:12:00.265 { 00:12:00.265 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:00.265 "dma_device_type": 2 00:12:00.265 }, 00:12:00.265 { 00:12:00.265 "dma_device_id": "system", 00:12:00.265 "dma_device_type": 1 00:12:00.265 }, 00:12:00.265 { 00:12:00.265 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:00.265 "dma_device_type": 2 00:12:00.265 } 00:12:00.265 ], 00:12:00.265 "driver_specific": { 00:12:00.265 "raid": { 00:12:00.265 "uuid": "ef999437-d5b2-4bd4-bd5b-d35963626701", 00:12:00.265 "strip_size_kb": 64, 00:12:00.265 "state": "online", 00:12:00.265 "raid_level": "raid0", 00:12:00.265 "superblock": false, 00:12:00.265 "num_base_bdevs": 4, 00:12:00.265 "num_base_bdevs_discovered": 4, 00:12:00.265 "num_base_bdevs_operational": 4, 00:12:00.265 "base_bdevs_list": [ 00:12:00.265 { 00:12:00.265 "name": "NewBaseBdev", 00:12:00.265 "uuid": "b7751820-3692-4286-afdc-a580e17c4aea", 00:12:00.265 "is_configured": true, 00:12:00.265 "data_offset": 0, 00:12:00.265 "data_size": 65536 00:12:00.265 }, 00:12:00.265 { 00:12:00.265 "name": "BaseBdev2", 00:12:00.265 "uuid": "5e31c4f6-af0b-4d48-b18d-1bd2d46e9038", 00:12:00.265 "is_configured": true, 00:12:00.265 "data_offset": 0, 00:12:00.265 "data_size": 65536 00:12:00.265 }, 00:12:00.265 { 00:12:00.265 "name": "BaseBdev3", 00:12:00.265 "uuid": "7ee6b33b-8c4a-4222-a459-751231bdfbc2", 00:12:00.265 "is_configured": true, 00:12:00.265 "data_offset": 0, 00:12:00.265 "data_size": 65536 00:12:00.265 }, 00:12:00.265 { 00:12:00.265 "name": "BaseBdev4", 
00:12:00.265 "uuid": "fc905d50-8121-4fd5-81fa-5193fce36143", 00:12:00.265 "is_configured": true, 00:12:00.265 "data_offset": 0, 00:12:00.265 "data_size": 65536 00:12:00.265 } 00:12:00.265 ] 00:12:00.265 } 00:12:00.265 } 00:12:00.265 }' 00:12:00.265 04:02:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:12:00.265 04:02:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:12:00.265 BaseBdev2 00:12:00.265 BaseBdev3 00:12:00.265 BaseBdev4' 00:12:00.265 04:02:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:00.265 04:02:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:12:00.265 04:02:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:00.265 04:02:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:00.265 04:02:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:12:00.265 04:02:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:00.265 04:02:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:00.265 04:02:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:00.265 04:02:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:00.265 04:02:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:00.265 04:02:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:00.265 04:02:53 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:00.265 04:02:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:12:00.265 04:02:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:00.265 04:02:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:00.265 04:02:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:00.265 04:02:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:00.265 04:02:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:00.265 04:02:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:00.265 04:02:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:00.265 04:02:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:12:00.265 04:02:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:00.265 04:02:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:00.265 04:02:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:00.265 04:02:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:00.265 04:02:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:00.265 04:02:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:00.265 04:02:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 
00:12:00.265 04:02:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:12:00.265 04:02:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:00.265 04:02:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:00.523 04:02:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:00.523 04:02:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:00.523 04:02:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:00.523 04:02:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:12:00.523 04:02:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:00.523 04:02:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:00.523 [2024-12-06 04:02:53.651375] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:12:00.523 [2024-12-06 04:02:53.651409] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:00.523 [2024-12-06 04:02:53.651500] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:00.523 [2024-12-06 04:02:53.651575] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:00.523 [2024-12-06 04:02:53.651586] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:12:00.523 04:02:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:00.523 04:02:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 69457 00:12:00.523 04:02:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # '[' -z 69457 
']' 00:12:00.523 04:02:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # kill -0 69457 00:12:00.523 04:02:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # uname 00:12:00.523 04:02:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:00.523 04:02:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 69457 00:12:00.523 killing process with pid 69457 00:12:00.523 04:02:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:00.523 04:02:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:00.523 04:02:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 69457' 00:12:00.523 04:02:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@973 -- # kill 69457 00:12:00.523 04:02:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@978 -- # wait 69457 00:12:00.523 [2024-12-06 04:02:53.689651] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:12:00.782 [2024-12-06 04:02:54.122484] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:12:02.158 ************************************ 00:12:02.158 END TEST raid_state_function_test 00:12:02.158 ************************************ 00:12:02.158 04:02:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:12:02.158 00:12:02.158 real 0m11.688s 00:12:02.158 user 0m18.603s 00:12:02.158 sys 0m1.944s 00:12:02.158 04:02:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:02.158 04:02:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:02.158 04:02:55 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test raid0 4 true 00:12:02.158 
04:02:55 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:12:02.158 04:02:55 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:02.158 04:02:55 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:12:02.158 ************************************ 00:12:02.158 START TEST raid_state_function_test_sb 00:12:02.158 ************************************ 00:12:02.158 04:02:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test raid0 4 true 00:12:02.158 04:02:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid0 00:12:02.158 04:02:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:12:02.158 04:02:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:12:02.158 04:02:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:12:02.158 04:02:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:12:02.158 04:02:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:12:02.159 04:02:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:12:02.159 04:02:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:12:02.159 04:02:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:12:02.159 04:02:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:12:02.159 04:02:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:12:02.159 04:02:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:12:02.159 04:02:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:12:02.159 04:02:55 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@209 -- # (( i++ )) 00:12:02.159 04:02:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:12:02.159 04:02:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:12:02.159 04:02:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:12:02.159 04:02:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:12:02.159 04:02:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:12:02.159 04:02:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:12:02.159 04:02:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:12:02.159 04:02:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:12:02.159 04:02:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:12:02.159 Process raid pid: 70129 00:12:02.159 04:02:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:12:02.159 04:02:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']' 00:12:02.159 04:02:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:12:02.159 04:02:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:12:02.159 04:02:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:12:02.159 04:02:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:12:02.159 04:02:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=70129 00:12:02.159 04:02:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 
'Process raid pid: 70129' 00:12:02.159 04:02:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 70129 00:12:02.159 04:02:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 70129 ']' 00:12:02.159 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:02.159 04:02:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:02.159 04:02:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:02.159 04:02:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:02.159 04:02:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:02.159 04:02:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:02.159 04:02:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:12:02.159 [2024-12-06 04:02:55.464419] Starting SPDK v25.01-pre git sha1 a4d2a837b / DPDK 24.03.0 initialization... 
00:12:02.159 [2024-12-06 04:02:55.464637] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:02.418 [2024-12-06 04:02:55.641808] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:02.418 [2024-12-06 04:02:55.770379] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:02.677 [2024-12-06 04:02:55.993197] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:02.677 [2024-12-06 04:02:55.993336] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:03.245 04:02:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:03.245 04:02:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:12:03.245 04:02:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:12:03.245 04:02:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:03.245 04:02:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:03.245 [2024-12-06 04:02:56.328751] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:12:03.245 [2024-12-06 04:02:56.328888] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:12:03.245 [2024-12-06 04:02:56.328905] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:12:03.245 [2024-12-06 04:02:56.328917] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:12:03.245 [2024-12-06 04:02:56.328924] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find 
bdev with name: BaseBdev3 00:12:03.245 [2024-12-06 04:02:56.328935] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:12:03.245 [2024-12-06 04:02:56.328942] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:12:03.245 [2024-12-06 04:02:56.328952] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:12:03.245 04:02:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:03.245 04:02:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:12:03.245 04:02:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:03.245 04:02:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:03.245 04:02:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:12:03.245 04:02:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:03.245 04:02:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:03.245 04:02:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:03.245 04:02:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:03.245 04:02:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:03.245 04:02:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:03.245 04:02:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:03.245 04:02:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:03.245 04:02:56 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:03.245 04:02:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:03.245 04:02:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:03.245 04:02:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:03.245 "name": "Existed_Raid", 00:12:03.245 "uuid": "a17f5692-0c65-41bd-bad9-2edfa1060101", 00:12:03.245 "strip_size_kb": 64, 00:12:03.245 "state": "configuring", 00:12:03.245 "raid_level": "raid0", 00:12:03.245 "superblock": true, 00:12:03.245 "num_base_bdevs": 4, 00:12:03.245 "num_base_bdevs_discovered": 0, 00:12:03.245 "num_base_bdevs_operational": 4, 00:12:03.245 "base_bdevs_list": [ 00:12:03.245 { 00:12:03.245 "name": "BaseBdev1", 00:12:03.245 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:03.245 "is_configured": false, 00:12:03.245 "data_offset": 0, 00:12:03.245 "data_size": 0 00:12:03.245 }, 00:12:03.245 { 00:12:03.245 "name": "BaseBdev2", 00:12:03.245 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:03.245 "is_configured": false, 00:12:03.245 "data_offset": 0, 00:12:03.245 "data_size": 0 00:12:03.245 }, 00:12:03.245 { 00:12:03.245 "name": "BaseBdev3", 00:12:03.245 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:03.245 "is_configured": false, 00:12:03.245 "data_offset": 0, 00:12:03.245 "data_size": 0 00:12:03.245 }, 00:12:03.245 { 00:12:03.245 "name": "BaseBdev4", 00:12:03.245 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:03.245 "is_configured": false, 00:12:03.245 "data_offset": 0, 00:12:03.245 "data_size": 0 00:12:03.245 } 00:12:03.245 ] 00:12:03.246 }' 00:12:03.246 04:02:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:03.246 04:02:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:03.505 04:02:56 bdev_raid.raid_state_function_test_sb 
-- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:12:03.505 04:02:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:03.505 04:02:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:03.505 [2024-12-06 04:02:56.791883] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:12:03.505 [2024-12-06 04:02:56.791925] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:12:03.505 04:02:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:03.505 04:02:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:12:03.505 04:02:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:03.505 04:02:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:03.505 [2024-12-06 04:02:56.799865] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:12:03.505 [2024-12-06 04:02:56.799908] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:12:03.505 [2024-12-06 04:02:56.799917] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:12:03.505 [2024-12-06 04:02:56.800072] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:12:03.505 [2024-12-06 04:02:56.800080] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:12:03.505 [2024-12-06 04:02:56.800089] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:12:03.505 [2024-12-06 04:02:56.800095] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 
BaseBdev4 00:12:03.505 [2024-12-06 04:02:56.800104] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:12:03.505 04:02:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:03.505 04:02:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:12:03.505 04:02:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:03.505 04:02:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:03.505 [2024-12-06 04:02:56.844382] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:03.505 BaseBdev1 00:12:03.505 04:02:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:03.505 04:02:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:12:03.505 04:02:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:12:03.505 04:02:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:03.505 04:02:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:12:03.505 04:02:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:03.505 04:02:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:03.505 04:02:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:12:03.505 04:02:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:03.505 04:02:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:03.505 04:02:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:12:03.505 04:02:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:12:03.505 04:02:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:03.505 04:02:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:03.763 [ 00:12:03.763 { 00:12:03.763 "name": "BaseBdev1", 00:12:03.763 "aliases": [ 00:12:03.763 "fe85fc05-792f-498c-bcb1-958f10eef091" 00:12:03.764 ], 00:12:03.764 "product_name": "Malloc disk", 00:12:03.764 "block_size": 512, 00:12:03.764 "num_blocks": 65536, 00:12:03.764 "uuid": "fe85fc05-792f-498c-bcb1-958f10eef091", 00:12:03.764 "assigned_rate_limits": { 00:12:03.764 "rw_ios_per_sec": 0, 00:12:03.764 "rw_mbytes_per_sec": 0, 00:12:03.764 "r_mbytes_per_sec": 0, 00:12:03.764 "w_mbytes_per_sec": 0 00:12:03.764 }, 00:12:03.764 "claimed": true, 00:12:03.764 "claim_type": "exclusive_write", 00:12:03.764 "zoned": false, 00:12:03.764 "supported_io_types": { 00:12:03.764 "read": true, 00:12:03.764 "write": true, 00:12:03.764 "unmap": true, 00:12:03.764 "flush": true, 00:12:03.764 "reset": true, 00:12:03.764 "nvme_admin": false, 00:12:03.764 "nvme_io": false, 00:12:03.764 "nvme_io_md": false, 00:12:03.764 "write_zeroes": true, 00:12:03.764 "zcopy": true, 00:12:03.764 "get_zone_info": false, 00:12:03.764 "zone_management": false, 00:12:03.764 "zone_append": false, 00:12:03.764 "compare": false, 00:12:03.764 "compare_and_write": false, 00:12:03.764 "abort": true, 00:12:03.764 "seek_hole": false, 00:12:03.764 "seek_data": false, 00:12:03.764 "copy": true, 00:12:03.764 "nvme_iov_md": false 00:12:03.764 }, 00:12:03.764 "memory_domains": [ 00:12:03.764 { 00:12:03.764 "dma_device_id": "system", 00:12:03.764 "dma_device_type": 1 00:12:03.764 }, 00:12:03.764 { 00:12:03.764 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:03.764 "dma_device_type": 2 00:12:03.764 } 00:12:03.764 ], 00:12:03.764 "driver_specific": {} 
00:12:03.764 } 00:12:03.764 ] 00:12:03.764 04:02:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:03.764 04:02:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:12:03.764 04:02:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:12:03.764 04:02:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:03.764 04:02:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:03.764 04:02:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:12:03.764 04:02:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:03.764 04:02:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:03.764 04:02:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:03.764 04:02:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:03.764 04:02:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:03.764 04:02:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:03.764 04:02:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:03.764 04:02:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:03.764 04:02:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:03.764 04:02:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:03.764 04:02:56 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:03.764 04:02:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:03.764 "name": "Existed_Raid", 00:12:03.764 "uuid": "c263a7fb-869c-4b31-b665-dd0e9845bcd2", 00:12:03.764 "strip_size_kb": 64, 00:12:03.764 "state": "configuring", 00:12:03.764 "raid_level": "raid0", 00:12:03.764 "superblock": true, 00:12:03.764 "num_base_bdevs": 4, 00:12:03.764 "num_base_bdevs_discovered": 1, 00:12:03.764 "num_base_bdevs_operational": 4, 00:12:03.764 "base_bdevs_list": [ 00:12:03.764 { 00:12:03.764 "name": "BaseBdev1", 00:12:03.764 "uuid": "fe85fc05-792f-498c-bcb1-958f10eef091", 00:12:03.764 "is_configured": true, 00:12:03.764 "data_offset": 2048, 00:12:03.764 "data_size": 63488 00:12:03.764 }, 00:12:03.764 { 00:12:03.764 "name": "BaseBdev2", 00:12:03.764 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:03.764 "is_configured": false, 00:12:03.764 "data_offset": 0, 00:12:03.764 "data_size": 0 00:12:03.764 }, 00:12:03.764 { 00:12:03.764 "name": "BaseBdev3", 00:12:03.764 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:03.764 "is_configured": false, 00:12:03.764 "data_offset": 0, 00:12:03.764 "data_size": 0 00:12:03.764 }, 00:12:03.764 { 00:12:03.764 "name": "BaseBdev4", 00:12:03.764 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:03.764 "is_configured": false, 00:12:03.764 "data_offset": 0, 00:12:03.764 "data_size": 0 00:12:03.764 } 00:12:03.764 ] 00:12:03.764 }' 00:12:03.764 04:02:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:03.764 04:02:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:04.050 04:02:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:12:04.050 04:02:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:04.050 04:02:57 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:12:04.050 [2024-12-06 04:02:57.279684] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:12:04.050 [2024-12-06 04:02:57.279817] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:12:04.050 04:02:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:04.050 04:02:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:12:04.050 04:02:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:04.050 04:02:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:04.050 [2024-12-06 04:02:57.287732] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:04.050 [2024-12-06 04:02:57.289772] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:12:04.050 [2024-12-06 04:02:57.289859] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:12:04.050 [2024-12-06 04:02:57.289899] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:12:04.050 [2024-12-06 04:02:57.289929] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:12:04.050 [2024-12-06 04:02:57.289988] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:12:04.050 [2024-12-06 04:02:57.290030] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:12:04.050 04:02:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:04.050 04:02:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:12:04.050 04:02:57 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:12:04.050 04:02:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:12:04.050 04:02:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:04.050 04:02:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:04.050 04:02:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:12:04.050 04:02:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:04.050 04:02:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:04.050 04:02:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:04.050 04:02:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:04.050 04:02:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:04.050 04:02:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:04.050 04:02:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:04.050 04:02:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:04.050 04:02:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:04.050 04:02:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:04.050 04:02:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:04.050 04:02:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:04.050 "name": 
"Existed_Raid", 00:12:04.050 "uuid": "e3fafb01-f7a9-4b83-bdcd-dea0506f8f97", 00:12:04.050 "strip_size_kb": 64, 00:12:04.050 "state": "configuring", 00:12:04.050 "raid_level": "raid0", 00:12:04.050 "superblock": true, 00:12:04.050 "num_base_bdevs": 4, 00:12:04.050 "num_base_bdevs_discovered": 1, 00:12:04.050 "num_base_bdevs_operational": 4, 00:12:04.050 "base_bdevs_list": [ 00:12:04.050 { 00:12:04.050 "name": "BaseBdev1", 00:12:04.050 "uuid": "fe85fc05-792f-498c-bcb1-958f10eef091", 00:12:04.050 "is_configured": true, 00:12:04.050 "data_offset": 2048, 00:12:04.050 "data_size": 63488 00:12:04.050 }, 00:12:04.050 { 00:12:04.050 "name": "BaseBdev2", 00:12:04.050 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:04.050 "is_configured": false, 00:12:04.050 "data_offset": 0, 00:12:04.050 "data_size": 0 00:12:04.050 }, 00:12:04.050 { 00:12:04.050 "name": "BaseBdev3", 00:12:04.050 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:04.050 "is_configured": false, 00:12:04.050 "data_offset": 0, 00:12:04.050 "data_size": 0 00:12:04.050 }, 00:12:04.050 { 00:12:04.050 "name": "BaseBdev4", 00:12:04.050 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:04.050 "is_configured": false, 00:12:04.050 "data_offset": 0, 00:12:04.050 "data_size": 0 00:12:04.050 } 00:12:04.050 ] 00:12:04.050 }' 00:12:04.050 04:02:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:04.050 04:02:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:04.619 04:02:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:12:04.619 04:02:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:04.619 04:02:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:04.619 [2024-12-06 04:02:57.793123] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 
00:12:04.619 BaseBdev2 00:12:04.619 04:02:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:04.619 04:02:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:12:04.619 04:02:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:12:04.619 04:02:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:04.619 04:02:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:12:04.619 04:02:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:04.620 04:02:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:04.620 04:02:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:12:04.620 04:02:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:04.620 04:02:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:04.620 04:02:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:04.620 04:02:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:12:04.620 04:02:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:04.620 04:02:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:04.620 [ 00:12:04.620 { 00:12:04.620 "name": "BaseBdev2", 00:12:04.620 "aliases": [ 00:12:04.620 "55ba3b8a-2d5c-498c-987a-ca38b6be716c" 00:12:04.620 ], 00:12:04.620 "product_name": "Malloc disk", 00:12:04.620 "block_size": 512, 00:12:04.620 "num_blocks": 65536, 00:12:04.620 "uuid": "55ba3b8a-2d5c-498c-987a-ca38b6be716c", 00:12:04.620 
"assigned_rate_limits": { 00:12:04.620 "rw_ios_per_sec": 0, 00:12:04.620 "rw_mbytes_per_sec": 0, 00:12:04.620 "r_mbytes_per_sec": 0, 00:12:04.620 "w_mbytes_per_sec": 0 00:12:04.620 }, 00:12:04.620 "claimed": true, 00:12:04.620 "claim_type": "exclusive_write", 00:12:04.620 "zoned": false, 00:12:04.620 "supported_io_types": { 00:12:04.620 "read": true, 00:12:04.620 "write": true, 00:12:04.620 "unmap": true, 00:12:04.620 "flush": true, 00:12:04.620 "reset": true, 00:12:04.620 "nvme_admin": false, 00:12:04.620 "nvme_io": false, 00:12:04.620 "nvme_io_md": false, 00:12:04.620 "write_zeroes": true, 00:12:04.620 "zcopy": true, 00:12:04.620 "get_zone_info": false, 00:12:04.620 "zone_management": false, 00:12:04.620 "zone_append": false, 00:12:04.620 "compare": false, 00:12:04.620 "compare_and_write": false, 00:12:04.620 "abort": true, 00:12:04.620 "seek_hole": false, 00:12:04.620 "seek_data": false, 00:12:04.620 "copy": true, 00:12:04.620 "nvme_iov_md": false 00:12:04.620 }, 00:12:04.620 "memory_domains": [ 00:12:04.620 { 00:12:04.620 "dma_device_id": "system", 00:12:04.620 "dma_device_type": 1 00:12:04.620 }, 00:12:04.620 { 00:12:04.620 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:04.620 "dma_device_type": 2 00:12:04.620 } 00:12:04.620 ], 00:12:04.620 "driver_specific": {} 00:12:04.620 } 00:12:04.620 ] 00:12:04.620 04:02:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:04.620 04:02:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:12:04.620 04:02:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:12:04.620 04:02:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:12:04.620 04:02:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:12:04.620 04:02:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local 
raid_bdev_name=Existed_Raid 00:12:04.620 04:02:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:04.620 04:02:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:12:04.620 04:02:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:04.620 04:02:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:04.620 04:02:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:04.620 04:02:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:04.620 04:02:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:04.620 04:02:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:04.620 04:02:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:04.620 04:02:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:04.620 04:02:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:04.620 04:02:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:04.620 04:02:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:04.620 04:02:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:04.620 "name": "Existed_Raid", 00:12:04.620 "uuid": "e3fafb01-f7a9-4b83-bdcd-dea0506f8f97", 00:12:04.620 "strip_size_kb": 64, 00:12:04.620 "state": "configuring", 00:12:04.620 "raid_level": "raid0", 00:12:04.620 "superblock": true, 00:12:04.620 "num_base_bdevs": 4, 00:12:04.620 "num_base_bdevs_discovered": 2, 00:12:04.620 "num_base_bdevs_operational": 4, 
00:12:04.620 "base_bdevs_list": [ 00:12:04.620 { 00:12:04.620 "name": "BaseBdev1", 00:12:04.620 "uuid": "fe85fc05-792f-498c-bcb1-958f10eef091", 00:12:04.620 "is_configured": true, 00:12:04.620 "data_offset": 2048, 00:12:04.620 "data_size": 63488 00:12:04.620 }, 00:12:04.620 { 00:12:04.620 "name": "BaseBdev2", 00:12:04.620 "uuid": "55ba3b8a-2d5c-498c-987a-ca38b6be716c", 00:12:04.620 "is_configured": true, 00:12:04.620 "data_offset": 2048, 00:12:04.620 "data_size": 63488 00:12:04.620 }, 00:12:04.620 { 00:12:04.620 "name": "BaseBdev3", 00:12:04.620 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:04.620 "is_configured": false, 00:12:04.620 "data_offset": 0, 00:12:04.620 "data_size": 0 00:12:04.620 }, 00:12:04.620 { 00:12:04.620 "name": "BaseBdev4", 00:12:04.620 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:04.620 "is_configured": false, 00:12:04.620 "data_offset": 0, 00:12:04.620 "data_size": 0 00:12:04.620 } 00:12:04.620 ] 00:12:04.620 }' 00:12:04.620 04:02:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:04.620 04:02:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:05.190 04:02:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:12:05.190 04:02:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:05.190 04:02:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:05.190 [2024-12-06 04:02:58.330024] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:05.190 BaseBdev3 00:12:05.190 04:02:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:05.190 04:02:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:12:05.190 04:02:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # 
local bdev_name=BaseBdev3 00:12:05.190 04:02:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:05.190 04:02:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:12:05.190 04:02:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:05.190 04:02:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:05.190 04:02:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:12:05.190 04:02:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:05.190 04:02:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:05.190 04:02:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:05.190 04:02:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:12:05.190 04:02:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:05.190 04:02:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:05.190 [ 00:12:05.190 { 00:12:05.190 "name": "BaseBdev3", 00:12:05.190 "aliases": [ 00:12:05.190 "c4dfbd9a-c008-4087-aab2-ff882e4bc1c7" 00:12:05.190 ], 00:12:05.190 "product_name": "Malloc disk", 00:12:05.190 "block_size": 512, 00:12:05.190 "num_blocks": 65536, 00:12:05.190 "uuid": "c4dfbd9a-c008-4087-aab2-ff882e4bc1c7", 00:12:05.190 "assigned_rate_limits": { 00:12:05.190 "rw_ios_per_sec": 0, 00:12:05.190 "rw_mbytes_per_sec": 0, 00:12:05.190 "r_mbytes_per_sec": 0, 00:12:05.190 "w_mbytes_per_sec": 0 00:12:05.190 }, 00:12:05.190 "claimed": true, 00:12:05.190 "claim_type": "exclusive_write", 00:12:05.190 "zoned": false, 00:12:05.190 "supported_io_types": { 00:12:05.190 "read": true, 00:12:05.190 
"write": true, 00:12:05.190 "unmap": true, 00:12:05.190 "flush": true, 00:12:05.190 "reset": true, 00:12:05.190 "nvme_admin": false, 00:12:05.190 "nvme_io": false, 00:12:05.190 "nvme_io_md": false, 00:12:05.190 "write_zeroes": true, 00:12:05.190 "zcopy": true, 00:12:05.190 "get_zone_info": false, 00:12:05.190 "zone_management": false, 00:12:05.190 "zone_append": false, 00:12:05.190 "compare": false, 00:12:05.190 "compare_and_write": false, 00:12:05.190 "abort": true, 00:12:05.190 "seek_hole": false, 00:12:05.190 "seek_data": false, 00:12:05.190 "copy": true, 00:12:05.190 "nvme_iov_md": false 00:12:05.190 }, 00:12:05.190 "memory_domains": [ 00:12:05.190 { 00:12:05.190 "dma_device_id": "system", 00:12:05.190 "dma_device_type": 1 00:12:05.190 }, 00:12:05.190 { 00:12:05.190 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:05.190 "dma_device_type": 2 00:12:05.190 } 00:12:05.190 ], 00:12:05.190 "driver_specific": {} 00:12:05.190 } 00:12:05.190 ] 00:12:05.191 04:02:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:05.191 04:02:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:12:05.191 04:02:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:12:05.191 04:02:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:12:05.191 04:02:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:12:05.191 04:02:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:05.191 04:02:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:05.191 04:02:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:12:05.191 04:02:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local 
strip_size=64 00:12:05.191 04:02:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:05.191 04:02:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:05.191 04:02:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:05.191 04:02:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:05.191 04:02:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:05.191 04:02:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:05.191 04:02:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:05.191 04:02:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:05.191 04:02:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:05.191 04:02:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:05.191 04:02:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:05.191 "name": "Existed_Raid", 00:12:05.191 "uuid": "e3fafb01-f7a9-4b83-bdcd-dea0506f8f97", 00:12:05.191 "strip_size_kb": 64, 00:12:05.191 "state": "configuring", 00:12:05.191 "raid_level": "raid0", 00:12:05.191 "superblock": true, 00:12:05.191 "num_base_bdevs": 4, 00:12:05.191 "num_base_bdevs_discovered": 3, 00:12:05.191 "num_base_bdevs_operational": 4, 00:12:05.191 "base_bdevs_list": [ 00:12:05.191 { 00:12:05.191 "name": "BaseBdev1", 00:12:05.191 "uuid": "fe85fc05-792f-498c-bcb1-958f10eef091", 00:12:05.191 "is_configured": true, 00:12:05.191 "data_offset": 2048, 00:12:05.191 "data_size": 63488 00:12:05.191 }, 00:12:05.191 { 00:12:05.191 "name": "BaseBdev2", 00:12:05.191 "uuid": 
"55ba3b8a-2d5c-498c-987a-ca38b6be716c", 00:12:05.191 "is_configured": true, 00:12:05.191 "data_offset": 2048, 00:12:05.191 "data_size": 63488 00:12:05.191 }, 00:12:05.191 { 00:12:05.191 "name": "BaseBdev3", 00:12:05.191 "uuid": "c4dfbd9a-c008-4087-aab2-ff882e4bc1c7", 00:12:05.191 "is_configured": true, 00:12:05.191 "data_offset": 2048, 00:12:05.191 "data_size": 63488 00:12:05.191 }, 00:12:05.191 { 00:12:05.191 "name": "BaseBdev4", 00:12:05.191 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:05.191 "is_configured": false, 00:12:05.191 "data_offset": 0, 00:12:05.191 "data_size": 0 00:12:05.191 } 00:12:05.191 ] 00:12:05.191 }' 00:12:05.191 04:02:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:05.191 04:02:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:05.451 04:02:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:12:05.451 04:02:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:05.451 04:02:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:05.712 [2024-12-06 04:02:58.808591] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:12:05.712 [2024-12-06 04:02:58.809014] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:12:05.712 [2024-12-06 04:02:58.809099] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:12:05.712 [2024-12-06 04:02:58.809453] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:12:05.712 BaseBdev4 00:12:05.712 [2024-12-06 04:02:58.809672] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:12:05.712 [2024-12-06 04:02:58.809738] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, 
raid_bdev 0x617000007e80 00:12:05.712 [2024-12-06 04:02:58.809910] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:05.712 04:02:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:05.712 04:02:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:12:05.712 04:02:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:12:05.712 04:02:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:05.712 04:02:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:12:05.712 04:02:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:05.712 04:02:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:05.712 04:02:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:12:05.712 04:02:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:05.712 04:02:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:05.712 04:02:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:05.712 04:02:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:12:05.712 04:02:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:05.712 04:02:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:05.712 [ 00:12:05.712 { 00:12:05.712 "name": "BaseBdev4", 00:12:05.712 "aliases": [ 00:12:05.712 "f4c3af0a-d81d-4c41-9d99-6828a271a70c" 00:12:05.712 ], 00:12:05.712 "product_name": "Malloc disk", 00:12:05.712 "block_size": 512, 00:12:05.712 
"num_blocks": 65536, 00:12:05.712 "uuid": "f4c3af0a-d81d-4c41-9d99-6828a271a70c", 00:12:05.712 "assigned_rate_limits": { 00:12:05.712 "rw_ios_per_sec": 0, 00:12:05.712 "rw_mbytes_per_sec": 0, 00:12:05.712 "r_mbytes_per_sec": 0, 00:12:05.712 "w_mbytes_per_sec": 0 00:12:05.712 }, 00:12:05.712 "claimed": true, 00:12:05.712 "claim_type": "exclusive_write", 00:12:05.712 "zoned": false, 00:12:05.712 "supported_io_types": { 00:12:05.712 "read": true, 00:12:05.712 "write": true, 00:12:05.712 "unmap": true, 00:12:05.712 "flush": true, 00:12:05.712 "reset": true, 00:12:05.712 "nvme_admin": false, 00:12:05.712 "nvme_io": false, 00:12:05.712 "nvme_io_md": false, 00:12:05.712 "write_zeroes": true, 00:12:05.712 "zcopy": true, 00:12:05.712 "get_zone_info": false, 00:12:05.712 "zone_management": false, 00:12:05.712 "zone_append": false, 00:12:05.712 "compare": false, 00:12:05.712 "compare_and_write": false, 00:12:05.712 "abort": true, 00:12:05.712 "seek_hole": false, 00:12:05.712 "seek_data": false, 00:12:05.712 "copy": true, 00:12:05.712 "nvme_iov_md": false 00:12:05.712 }, 00:12:05.712 "memory_domains": [ 00:12:05.712 { 00:12:05.712 "dma_device_id": "system", 00:12:05.712 "dma_device_type": 1 00:12:05.712 }, 00:12:05.712 { 00:12:05.712 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:05.712 "dma_device_type": 2 00:12:05.712 } 00:12:05.712 ], 00:12:05.712 "driver_specific": {} 00:12:05.712 } 00:12:05.712 ] 00:12:05.712 04:02:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:05.712 04:02:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:12:05.712 04:02:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:12:05.713 04:02:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:12:05.713 04:02:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 64 4 
00:12:05.713 04:02:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:05.713 04:02:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:05.713 04:02:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:12:05.713 04:02:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:05.713 04:02:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:05.713 04:02:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:05.713 04:02:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:05.713 04:02:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:05.713 04:02:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:05.713 04:02:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:05.713 04:02:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:05.713 04:02:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:05.713 04:02:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:05.713 04:02:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:05.713 04:02:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:05.713 "name": "Existed_Raid", 00:12:05.713 "uuid": "e3fafb01-f7a9-4b83-bdcd-dea0506f8f97", 00:12:05.713 "strip_size_kb": 64, 00:12:05.713 "state": "online", 00:12:05.713 "raid_level": "raid0", 00:12:05.713 "superblock": true, 00:12:05.713 "num_base_bdevs": 4, 
00:12:05.713 "num_base_bdevs_discovered": 4, 00:12:05.713 "num_base_bdevs_operational": 4, 00:12:05.713 "base_bdevs_list": [ 00:12:05.713 { 00:12:05.713 "name": "BaseBdev1", 00:12:05.713 "uuid": "fe85fc05-792f-498c-bcb1-958f10eef091", 00:12:05.713 "is_configured": true, 00:12:05.713 "data_offset": 2048, 00:12:05.713 "data_size": 63488 00:12:05.713 }, 00:12:05.713 { 00:12:05.713 "name": "BaseBdev2", 00:12:05.713 "uuid": "55ba3b8a-2d5c-498c-987a-ca38b6be716c", 00:12:05.713 "is_configured": true, 00:12:05.713 "data_offset": 2048, 00:12:05.713 "data_size": 63488 00:12:05.713 }, 00:12:05.713 { 00:12:05.713 "name": "BaseBdev3", 00:12:05.713 "uuid": "c4dfbd9a-c008-4087-aab2-ff882e4bc1c7", 00:12:05.713 "is_configured": true, 00:12:05.713 "data_offset": 2048, 00:12:05.713 "data_size": 63488 00:12:05.713 }, 00:12:05.713 { 00:12:05.713 "name": "BaseBdev4", 00:12:05.713 "uuid": "f4c3af0a-d81d-4c41-9d99-6828a271a70c", 00:12:05.713 "is_configured": true, 00:12:05.713 "data_offset": 2048, 00:12:05.713 "data_size": 63488 00:12:05.713 } 00:12:05.713 ] 00:12:05.713 }' 00:12:05.713 04:02:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:05.713 04:02:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:05.972 04:02:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:12:05.972 04:02:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:12:05.972 04:02:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:12:05.972 04:02:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:12:05.972 04:02:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:12:05.972 04:02:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:12:05.972 
04:02:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:12:05.972 04:02:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:12:05.972 04:02:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:05.972 04:02:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:05.972 [2024-12-06 04:02:59.320311] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:06.232 04:02:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:06.232 04:02:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:12:06.232 "name": "Existed_Raid", 00:12:06.232 "aliases": [ 00:12:06.232 "e3fafb01-f7a9-4b83-bdcd-dea0506f8f97" 00:12:06.232 ], 00:12:06.232 "product_name": "Raid Volume", 00:12:06.232 "block_size": 512, 00:12:06.232 "num_blocks": 253952, 00:12:06.232 "uuid": "e3fafb01-f7a9-4b83-bdcd-dea0506f8f97", 00:12:06.232 "assigned_rate_limits": { 00:12:06.232 "rw_ios_per_sec": 0, 00:12:06.232 "rw_mbytes_per_sec": 0, 00:12:06.232 "r_mbytes_per_sec": 0, 00:12:06.232 "w_mbytes_per_sec": 0 00:12:06.232 }, 00:12:06.232 "claimed": false, 00:12:06.232 "zoned": false, 00:12:06.232 "supported_io_types": { 00:12:06.232 "read": true, 00:12:06.232 "write": true, 00:12:06.232 "unmap": true, 00:12:06.232 "flush": true, 00:12:06.232 "reset": true, 00:12:06.232 "nvme_admin": false, 00:12:06.232 "nvme_io": false, 00:12:06.232 "nvme_io_md": false, 00:12:06.232 "write_zeroes": true, 00:12:06.232 "zcopy": false, 00:12:06.232 "get_zone_info": false, 00:12:06.232 "zone_management": false, 00:12:06.232 "zone_append": false, 00:12:06.232 "compare": false, 00:12:06.232 "compare_and_write": false, 00:12:06.232 "abort": false, 00:12:06.232 "seek_hole": false, 00:12:06.232 "seek_data": false, 00:12:06.232 "copy": false, 00:12:06.232 
"nvme_iov_md": false 00:12:06.232 }, 00:12:06.232 "memory_domains": [ 00:12:06.232 { 00:12:06.232 "dma_device_id": "system", 00:12:06.232 "dma_device_type": 1 00:12:06.232 }, 00:12:06.232 { 00:12:06.232 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:06.232 "dma_device_type": 2 00:12:06.232 }, 00:12:06.232 { 00:12:06.232 "dma_device_id": "system", 00:12:06.232 "dma_device_type": 1 00:12:06.232 }, 00:12:06.232 { 00:12:06.232 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:06.232 "dma_device_type": 2 00:12:06.232 }, 00:12:06.232 { 00:12:06.232 "dma_device_id": "system", 00:12:06.232 "dma_device_type": 1 00:12:06.232 }, 00:12:06.232 { 00:12:06.232 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:06.232 "dma_device_type": 2 00:12:06.232 }, 00:12:06.232 { 00:12:06.232 "dma_device_id": "system", 00:12:06.232 "dma_device_type": 1 00:12:06.232 }, 00:12:06.232 { 00:12:06.232 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:06.233 "dma_device_type": 2 00:12:06.233 } 00:12:06.233 ], 00:12:06.233 "driver_specific": { 00:12:06.233 "raid": { 00:12:06.233 "uuid": "e3fafb01-f7a9-4b83-bdcd-dea0506f8f97", 00:12:06.233 "strip_size_kb": 64, 00:12:06.233 "state": "online", 00:12:06.233 "raid_level": "raid0", 00:12:06.233 "superblock": true, 00:12:06.233 "num_base_bdevs": 4, 00:12:06.233 "num_base_bdevs_discovered": 4, 00:12:06.233 "num_base_bdevs_operational": 4, 00:12:06.233 "base_bdevs_list": [ 00:12:06.233 { 00:12:06.233 "name": "BaseBdev1", 00:12:06.233 "uuid": "fe85fc05-792f-498c-bcb1-958f10eef091", 00:12:06.233 "is_configured": true, 00:12:06.233 "data_offset": 2048, 00:12:06.233 "data_size": 63488 00:12:06.233 }, 00:12:06.233 { 00:12:06.233 "name": "BaseBdev2", 00:12:06.233 "uuid": "55ba3b8a-2d5c-498c-987a-ca38b6be716c", 00:12:06.233 "is_configured": true, 00:12:06.233 "data_offset": 2048, 00:12:06.233 "data_size": 63488 00:12:06.233 }, 00:12:06.233 { 00:12:06.233 "name": "BaseBdev3", 00:12:06.233 "uuid": "c4dfbd9a-c008-4087-aab2-ff882e4bc1c7", 00:12:06.233 "is_configured": true, 
00:12:06.233 "data_offset": 2048, 00:12:06.233 "data_size": 63488 00:12:06.233 }, 00:12:06.233 { 00:12:06.233 "name": "BaseBdev4", 00:12:06.233 "uuid": "f4c3af0a-d81d-4c41-9d99-6828a271a70c", 00:12:06.233 "is_configured": true, 00:12:06.233 "data_offset": 2048, 00:12:06.233 "data_size": 63488 00:12:06.233 } 00:12:06.233 ] 00:12:06.233 } 00:12:06.233 } 00:12:06.233 }' 00:12:06.233 04:02:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:12:06.233 04:02:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:12:06.233 BaseBdev2 00:12:06.233 BaseBdev3 00:12:06.233 BaseBdev4' 00:12:06.233 04:02:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:06.233 04:02:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:12:06.233 04:02:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:06.233 04:02:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:06.233 04:02:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:12:06.233 04:02:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:06.233 04:02:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:06.233 04:02:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:06.233 04:02:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:06.233 04:02:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:06.233 04:02:59 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:06.233 04:02:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:12:06.233 04:02:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:06.233 04:02:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:06.233 04:02:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:06.233 04:02:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:06.233 04:02:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:06.233 04:02:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:06.233 04:02:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:06.233 04:02:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:12:06.233 04:02:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:06.233 04:02:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:06.233 04:02:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:06.233 04:02:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:06.233 04:02:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:06.233 04:02:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:06.233 04:02:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # 
for name in $base_bdev_names 00:12:06.233 04:02:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:12:06.233 04:02:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:06.233 04:02:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:06.233 04:02:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:06.233 04:02:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:06.493 04:02:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:06.493 04:02:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:06.493 04:02:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:12:06.493 04:02:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:06.493 04:02:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:06.493 [2024-12-06 04:02:59.595506] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:12:06.493 [2024-12-06 04:02:59.595536] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:06.493 [2024-12-06 04:02:59.595587] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:06.493 04:02:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:06.493 04:02:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:12:06.493 04:02:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0 00:12:06.493 04:02:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # 
case $1 in 00:12:06.493 04:02:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1 00:12:06.493 04:02:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:12:06.493 04:02:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 3 00:12:06.493 04:02:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:06.493 04:02:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:12:06.493 04:02:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:12:06.493 04:02:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:06.493 04:02:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:06.493 04:02:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:06.493 04:02:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:06.493 04:02:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:06.493 04:02:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:06.493 04:02:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:06.493 04:02:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:06.493 04:02:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:06.493 04:02:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:06.493 04:02:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:12:06.493 04:02:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:06.493 "name": "Existed_Raid", 00:12:06.493 "uuid": "e3fafb01-f7a9-4b83-bdcd-dea0506f8f97", 00:12:06.493 "strip_size_kb": 64, 00:12:06.493 "state": "offline", 00:12:06.493 "raid_level": "raid0", 00:12:06.493 "superblock": true, 00:12:06.493 "num_base_bdevs": 4, 00:12:06.493 "num_base_bdevs_discovered": 3, 00:12:06.493 "num_base_bdevs_operational": 3, 00:12:06.493 "base_bdevs_list": [ 00:12:06.493 { 00:12:06.493 "name": null, 00:12:06.493 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:06.493 "is_configured": false, 00:12:06.493 "data_offset": 0, 00:12:06.493 "data_size": 63488 00:12:06.493 }, 00:12:06.493 { 00:12:06.493 "name": "BaseBdev2", 00:12:06.493 "uuid": "55ba3b8a-2d5c-498c-987a-ca38b6be716c", 00:12:06.493 "is_configured": true, 00:12:06.493 "data_offset": 2048, 00:12:06.493 "data_size": 63488 00:12:06.493 }, 00:12:06.493 { 00:12:06.493 "name": "BaseBdev3", 00:12:06.493 "uuid": "c4dfbd9a-c008-4087-aab2-ff882e4bc1c7", 00:12:06.493 "is_configured": true, 00:12:06.493 "data_offset": 2048, 00:12:06.493 "data_size": 63488 00:12:06.493 }, 00:12:06.493 { 00:12:06.494 "name": "BaseBdev4", 00:12:06.494 "uuid": "f4c3af0a-d81d-4c41-9d99-6828a271a70c", 00:12:06.494 "is_configured": true, 00:12:06.494 "data_offset": 2048, 00:12:06.494 "data_size": 63488 00:12:06.494 } 00:12:06.494 ] 00:12:06.494 }' 00:12:06.494 04:02:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:06.494 04:02:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:07.062 04:03:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:12:07.062 04:03:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:12:07.062 04:03:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:07.062 
04:03:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:07.062 04:03:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:07.062 04:03:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:12:07.062 04:03:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:07.062 04:03:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:12:07.062 04:03:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:12:07.062 04:03:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:12:07.062 04:03:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:07.062 04:03:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:07.062 [2024-12-06 04:03:00.222210] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:12:07.062 04:03:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:07.062 04:03:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:12:07.062 04:03:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:12:07.062 04:03:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:07.062 04:03:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:07.062 04:03:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:12:07.062 04:03:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:07.062 04:03:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:12:07.062 04:03:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:12:07.062 04:03:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:12:07.062 04:03:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:12:07.062 04:03:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:07.062 04:03:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:07.062 [2024-12-06 04:03:00.382634] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:12:07.321 04:03:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:07.321 04:03:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:12:07.321 04:03:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:12:07.321 04:03:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:12:07.321 04:03:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:07.321 04:03:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:07.321 04:03:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:07.321 04:03:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:07.321 04:03:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:12:07.321 04:03:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:12:07.321 04:03:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:12:07.321 04:03:00 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:07.321 04:03:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:07.321 [2024-12-06 04:03:00.549257] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:12:07.321 [2024-12-06 04:03:00.549373] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:12:07.321 04:03:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:07.321 04:03:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:12:07.321 04:03:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:12:07.321 04:03:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:07.321 04:03:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:07.321 04:03:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:12:07.321 04:03:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:07.581 04:03:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:07.581 04:03:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:12:07.581 04:03:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:12:07.581 04:03:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:12:07.581 04:03:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:12:07.581 04:03:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:12:07.581 04:03:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 
32 512 -b BaseBdev2 00:12:07.581 04:03:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:07.581 04:03:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:07.581 BaseBdev2 00:12:07.581 04:03:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:07.581 04:03:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:12:07.581 04:03:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:12:07.581 04:03:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:07.581 04:03:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:12:07.581 04:03:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:07.581 04:03:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:07.581 04:03:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:12:07.581 04:03:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:07.581 04:03:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:07.581 04:03:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:07.581 04:03:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:12:07.581 04:03:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:07.581 04:03:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:07.581 [ 00:12:07.581 { 00:12:07.581 "name": "BaseBdev2", 00:12:07.581 "aliases": [ 00:12:07.581 
"71b93141-7375-4ece-af3a-c2faedb776f0" 00:12:07.581 ], 00:12:07.581 "product_name": "Malloc disk", 00:12:07.581 "block_size": 512, 00:12:07.581 "num_blocks": 65536, 00:12:07.581 "uuid": "71b93141-7375-4ece-af3a-c2faedb776f0", 00:12:07.581 "assigned_rate_limits": { 00:12:07.581 "rw_ios_per_sec": 0, 00:12:07.581 "rw_mbytes_per_sec": 0, 00:12:07.581 "r_mbytes_per_sec": 0, 00:12:07.581 "w_mbytes_per_sec": 0 00:12:07.581 }, 00:12:07.581 "claimed": false, 00:12:07.581 "zoned": false, 00:12:07.581 "supported_io_types": { 00:12:07.581 "read": true, 00:12:07.581 "write": true, 00:12:07.581 "unmap": true, 00:12:07.581 "flush": true, 00:12:07.581 "reset": true, 00:12:07.581 "nvme_admin": false, 00:12:07.581 "nvme_io": false, 00:12:07.581 "nvme_io_md": false, 00:12:07.581 "write_zeroes": true, 00:12:07.581 "zcopy": true, 00:12:07.581 "get_zone_info": false, 00:12:07.581 "zone_management": false, 00:12:07.581 "zone_append": false, 00:12:07.581 "compare": false, 00:12:07.581 "compare_and_write": false, 00:12:07.581 "abort": true, 00:12:07.581 "seek_hole": false, 00:12:07.581 "seek_data": false, 00:12:07.581 "copy": true, 00:12:07.581 "nvme_iov_md": false 00:12:07.581 }, 00:12:07.581 "memory_domains": [ 00:12:07.581 { 00:12:07.581 "dma_device_id": "system", 00:12:07.581 "dma_device_type": 1 00:12:07.581 }, 00:12:07.581 { 00:12:07.581 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:07.581 "dma_device_type": 2 00:12:07.581 } 00:12:07.581 ], 00:12:07.581 "driver_specific": {} 00:12:07.581 } 00:12:07.581 ] 00:12:07.581 04:03:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:07.581 04:03:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:12:07.581 04:03:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:12:07.581 04:03:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:12:07.581 04:03:00 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:12:07.581 04:03:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:07.581 04:03:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:07.581 BaseBdev3 00:12:07.581 04:03:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:07.581 04:03:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:12:07.581 04:03:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:12:07.581 04:03:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:07.581 04:03:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:12:07.581 04:03:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:07.581 04:03:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:07.581 04:03:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:12:07.581 04:03:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:07.581 04:03:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:07.581 04:03:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:07.581 04:03:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:12:07.581 04:03:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:07.581 04:03:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:07.581 [ 00:12:07.581 { 
00:12:07.581 "name": "BaseBdev3", 00:12:07.581 "aliases": [ 00:12:07.581 "b499184d-a4b1-4968-b342-ab9f62de9e83" 00:12:07.581 ], 00:12:07.581 "product_name": "Malloc disk", 00:12:07.581 "block_size": 512, 00:12:07.581 "num_blocks": 65536, 00:12:07.581 "uuid": "b499184d-a4b1-4968-b342-ab9f62de9e83", 00:12:07.581 "assigned_rate_limits": { 00:12:07.581 "rw_ios_per_sec": 0, 00:12:07.581 "rw_mbytes_per_sec": 0, 00:12:07.581 "r_mbytes_per_sec": 0, 00:12:07.581 "w_mbytes_per_sec": 0 00:12:07.581 }, 00:12:07.581 "claimed": false, 00:12:07.581 "zoned": false, 00:12:07.581 "supported_io_types": { 00:12:07.581 "read": true, 00:12:07.581 "write": true, 00:12:07.581 "unmap": true, 00:12:07.581 "flush": true, 00:12:07.581 "reset": true, 00:12:07.581 "nvme_admin": false, 00:12:07.581 "nvme_io": false, 00:12:07.582 "nvme_io_md": false, 00:12:07.582 "write_zeroes": true, 00:12:07.582 "zcopy": true, 00:12:07.582 "get_zone_info": false, 00:12:07.582 "zone_management": false, 00:12:07.582 "zone_append": false, 00:12:07.582 "compare": false, 00:12:07.582 "compare_and_write": false, 00:12:07.582 "abort": true, 00:12:07.582 "seek_hole": false, 00:12:07.582 "seek_data": false, 00:12:07.582 "copy": true, 00:12:07.582 "nvme_iov_md": false 00:12:07.582 }, 00:12:07.582 "memory_domains": [ 00:12:07.582 { 00:12:07.582 "dma_device_id": "system", 00:12:07.582 "dma_device_type": 1 00:12:07.582 }, 00:12:07.582 { 00:12:07.582 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:07.582 "dma_device_type": 2 00:12:07.582 } 00:12:07.582 ], 00:12:07.582 "driver_specific": {} 00:12:07.582 } 00:12:07.582 ] 00:12:07.582 04:03:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:07.582 04:03:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:12:07.582 04:03:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:12:07.582 04:03:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i 
< num_base_bdevs )) 00:12:07.582 04:03:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:12:07.582 04:03:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:07.582 04:03:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:07.582 BaseBdev4 00:12:07.582 04:03:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:07.582 04:03:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:12:07.582 04:03:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:12:07.582 04:03:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:07.582 04:03:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:12:07.582 04:03:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:07.582 04:03:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:07.582 04:03:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:12:07.582 04:03:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:07.582 04:03:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:07.582 04:03:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:07.582 04:03:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:12:07.582 04:03:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:07.582 04:03:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- 
# set +x 00:12:07.842 [ 00:12:07.842 { 00:12:07.842 "name": "BaseBdev4", 00:12:07.842 "aliases": [ 00:12:07.842 "bafc4ab7-908b-4163-a2a7-675d8195e1de" 00:12:07.842 ], 00:12:07.842 "product_name": "Malloc disk", 00:12:07.842 "block_size": 512, 00:12:07.842 "num_blocks": 65536, 00:12:07.842 "uuid": "bafc4ab7-908b-4163-a2a7-675d8195e1de", 00:12:07.842 "assigned_rate_limits": { 00:12:07.842 "rw_ios_per_sec": 0, 00:12:07.842 "rw_mbytes_per_sec": 0, 00:12:07.842 "r_mbytes_per_sec": 0, 00:12:07.842 "w_mbytes_per_sec": 0 00:12:07.842 }, 00:12:07.842 "claimed": false, 00:12:07.842 "zoned": false, 00:12:07.842 "supported_io_types": { 00:12:07.842 "read": true, 00:12:07.842 "write": true, 00:12:07.842 "unmap": true, 00:12:07.843 "flush": true, 00:12:07.843 "reset": true, 00:12:07.843 "nvme_admin": false, 00:12:07.843 "nvme_io": false, 00:12:07.843 "nvme_io_md": false, 00:12:07.843 "write_zeroes": true, 00:12:07.843 "zcopy": true, 00:12:07.843 "get_zone_info": false, 00:12:07.843 "zone_management": false, 00:12:07.843 "zone_append": false, 00:12:07.843 "compare": false, 00:12:07.843 "compare_and_write": false, 00:12:07.843 "abort": true, 00:12:07.843 "seek_hole": false, 00:12:07.843 "seek_data": false, 00:12:07.843 "copy": true, 00:12:07.843 "nvme_iov_md": false 00:12:07.843 }, 00:12:07.843 "memory_domains": [ 00:12:07.843 { 00:12:07.843 "dma_device_id": "system", 00:12:07.843 "dma_device_type": 1 00:12:07.843 }, 00:12:07.843 { 00:12:07.843 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:07.843 "dma_device_type": 2 00:12:07.843 } 00:12:07.843 ], 00:12:07.843 "driver_specific": {} 00:12:07.843 } 00:12:07.843 ] 00:12:07.843 04:03:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:07.843 04:03:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:12:07.843 04:03:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:12:07.843 04:03:00 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:12:07.843 04:03:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:12:07.843 04:03:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:07.843 04:03:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:07.843 [2024-12-06 04:03:00.961479] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:12:07.843 [2024-12-06 04:03:00.961586] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:12:07.843 [2024-12-06 04:03:00.961634] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:07.843 [2024-12-06 04:03:00.963724] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:07.843 [2024-12-06 04:03:00.963826] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:12:07.843 04:03:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:07.843 04:03:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:12:07.843 04:03:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:07.843 04:03:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:07.843 04:03:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:12:07.843 04:03:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:07.843 04:03:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=4 00:12:07.843 04:03:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:07.843 04:03:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:07.843 04:03:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:07.843 04:03:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:07.843 04:03:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:07.843 04:03:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:07.843 04:03:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:07.843 04:03:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:07.843 04:03:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:07.843 04:03:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:07.843 "name": "Existed_Raid", 00:12:07.843 "uuid": "43929488-975e-4196-bd0c-cd5781ccf7fa", 00:12:07.843 "strip_size_kb": 64, 00:12:07.843 "state": "configuring", 00:12:07.843 "raid_level": "raid0", 00:12:07.843 "superblock": true, 00:12:07.843 "num_base_bdevs": 4, 00:12:07.843 "num_base_bdevs_discovered": 3, 00:12:07.843 "num_base_bdevs_operational": 4, 00:12:07.843 "base_bdevs_list": [ 00:12:07.843 { 00:12:07.843 "name": "BaseBdev1", 00:12:07.843 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:07.843 "is_configured": false, 00:12:07.843 "data_offset": 0, 00:12:07.843 "data_size": 0 00:12:07.843 }, 00:12:07.843 { 00:12:07.843 "name": "BaseBdev2", 00:12:07.843 "uuid": "71b93141-7375-4ece-af3a-c2faedb776f0", 00:12:07.843 "is_configured": true, 00:12:07.843 "data_offset": 2048, 00:12:07.843 "data_size": 63488 
00:12:07.843 }, 00:12:07.843 { 00:12:07.843 "name": "BaseBdev3", 00:12:07.843 "uuid": "b499184d-a4b1-4968-b342-ab9f62de9e83", 00:12:07.843 "is_configured": true, 00:12:07.843 "data_offset": 2048, 00:12:07.843 "data_size": 63488 00:12:07.843 }, 00:12:07.843 { 00:12:07.843 "name": "BaseBdev4", 00:12:07.843 "uuid": "bafc4ab7-908b-4163-a2a7-675d8195e1de", 00:12:07.843 "is_configured": true, 00:12:07.843 "data_offset": 2048, 00:12:07.843 "data_size": 63488 00:12:07.843 } 00:12:07.843 ] 00:12:07.843 }' 00:12:07.843 04:03:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:07.843 04:03:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:08.103 04:03:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:12:08.103 04:03:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:08.103 04:03:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:08.103 [2024-12-06 04:03:01.384784] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:12:08.103 04:03:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:08.103 04:03:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:12:08.103 04:03:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:08.103 04:03:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:08.103 04:03:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:12:08.103 04:03:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:08.103 04:03:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=4 00:12:08.103 04:03:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:08.103 04:03:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:08.103 04:03:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:08.103 04:03:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:08.103 04:03:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:08.103 04:03:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:08.103 04:03:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:08.103 04:03:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:08.103 04:03:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:08.103 04:03:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:08.103 "name": "Existed_Raid", 00:12:08.103 "uuid": "43929488-975e-4196-bd0c-cd5781ccf7fa", 00:12:08.103 "strip_size_kb": 64, 00:12:08.103 "state": "configuring", 00:12:08.103 "raid_level": "raid0", 00:12:08.103 "superblock": true, 00:12:08.103 "num_base_bdevs": 4, 00:12:08.103 "num_base_bdevs_discovered": 2, 00:12:08.103 "num_base_bdevs_operational": 4, 00:12:08.103 "base_bdevs_list": [ 00:12:08.103 { 00:12:08.103 "name": "BaseBdev1", 00:12:08.103 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:08.103 "is_configured": false, 00:12:08.103 "data_offset": 0, 00:12:08.103 "data_size": 0 00:12:08.103 }, 00:12:08.103 { 00:12:08.103 "name": null, 00:12:08.103 "uuid": "71b93141-7375-4ece-af3a-c2faedb776f0", 00:12:08.103 "is_configured": false, 00:12:08.103 "data_offset": 0, 00:12:08.103 "data_size": 63488 
00:12:08.103 }, 00:12:08.103 { 00:12:08.103 "name": "BaseBdev3", 00:12:08.103 "uuid": "b499184d-a4b1-4968-b342-ab9f62de9e83", 00:12:08.103 "is_configured": true, 00:12:08.103 "data_offset": 2048, 00:12:08.103 "data_size": 63488 00:12:08.103 }, 00:12:08.103 { 00:12:08.103 "name": "BaseBdev4", 00:12:08.103 "uuid": "bafc4ab7-908b-4163-a2a7-675d8195e1de", 00:12:08.103 "is_configured": true, 00:12:08.103 "data_offset": 2048, 00:12:08.103 "data_size": 63488 00:12:08.103 } 00:12:08.103 ] 00:12:08.103 }' 00:12:08.103 04:03:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:08.103 04:03:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:08.673 04:03:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:08.673 04:03:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:08.673 04:03:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:08.673 04:03:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:12:08.673 04:03:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:08.673 04:03:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:12:08.673 04:03:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:12:08.673 04:03:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:08.673 04:03:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:08.673 [2024-12-06 04:03:01.960487] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:08.673 BaseBdev1 00:12:08.673 04:03:01 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:08.673 04:03:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:12:08.673 04:03:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:12:08.673 04:03:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:08.673 04:03:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:12:08.673 04:03:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:08.673 04:03:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:08.673 04:03:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:12:08.673 04:03:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:08.673 04:03:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:08.673 04:03:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:08.673 04:03:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:12:08.673 04:03:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:08.673 04:03:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:08.673 [ 00:12:08.673 { 00:12:08.673 "name": "BaseBdev1", 00:12:08.673 "aliases": [ 00:12:08.673 "e988b6d4-6584-4384-b7be-41ef96cb0b20" 00:12:08.673 ], 00:12:08.673 "product_name": "Malloc disk", 00:12:08.673 "block_size": 512, 00:12:08.673 "num_blocks": 65536, 00:12:08.673 "uuid": "e988b6d4-6584-4384-b7be-41ef96cb0b20", 00:12:08.673 "assigned_rate_limits": { 00:12:08.673 "rw_ios_per_sec": 0, 00:12:08.673 "rw_mbytes_per_sec": 0, 
00:12:08.673 "r_mbytes_per_sec": 0, 00:12:08.673 "w_mbytes_per_sec": 0 00:12:08.673 }, 00:12:08.673 "claimed": true, 00:12:08.673 "claim_type": "exclusive_write", 00:12:08.673 "zoned": false, 00:12:08.673 "supported_io_types": { 00:12:08.673 "read": true, 00:12:08.673 "write": true, 00:12:08.673 "unmap": true, 00:12:08.673 "flush": true, 00:12:08.673 "reset": true, 00:12:08.673 "nvme_admin": false, 00:12:08.673 "nvme_io": false, 00:12:08.673 "nvme_io_md": false, 00:12:08.673 "write_zeroes": true, 00:12:08.673 "zcopy": true, 00:12:08.673 "get_zone_info": false, 00:12:08.673 "zone_management": false, 00:12:08.673 "zone_append": false, 00:12:08.673 "compare": false, 00:12:08.673 "compare_and_write": false, 00:12:08.673 "abort": true, 00:12:08.673 "seek_hole": false, 00:12:08.673 "seek_data": false, 00:12:08.673 "copy": true, 00:12:08.673 "nvme_iov_md": false 00:12:08.673 }, 00:12:08.673 "memory_domains": [ 00:12:08.673 { 00:12:08.673 "dma_device_id": "system", 00:12:08.673 "dma_device_type": 1 00:12:08.673 }, 00:12:08.673 { 00:12:08.673 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:08.673 "dma_device_type": 2 00:12:08.673 } 00:12:08.673 ], 00:12:08.673 "driver_specific": {} 00:12:08.673 } 00:12:08.673 ] 00:12:08.673 04:03:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:08.673 04:03:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:12:08.673 04:03:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:12:08.673 04:03:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:08.673 04:03:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:08.673 04:03:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:12:08.673 04:03:01 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:08.673 04:03:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:08.673 04:03:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:08.673 04:03:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:08.673 04:03:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:08.674 04:03:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:08.674 04:03:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:08.674 04:03:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:08.674 04:03:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:08.674 04:03:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:08.674 04:03:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:08.932 04:03:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:08.932 "name": "Existed_Raid", 00:12:08.932 "uuid": "43929488-975e-4196-bd0c-cd5781ccf7fa", 00:12:08.932 "strip_size_kb": 64, 00:12:08.932 "state": "configuring", 00:12:08.932 "raid_level": "raid0", 00:12:08.932 "superblock": true, 00:12:08.932 "num_base_bdevs": 4, 00:12:08.932 "num_base_bdevs_discovered": 3, 00:12:08.932 "num_base_bdevs_operational": 4, 00:12:08.932 "base_bdevs_list": [ 00:12:08.932 { 00:12:08.932 "name": "BaseBdev1", 00:12:08.932 "uuid": "e988b6d4-6584-4384-b7be-41ef96cb0b20", 00:12:08.932 "is_configured": true, 00:12:08.932 "data_offset": 2048, 00:12:08.932 "data_size": 63488 00:12:08.932 }, 00:12:08.932 { 
00:12:08.932 "name": null, 00:12:08.932 "uuid": "71b93141-7375-4ece-af3a-c2faedb776f0", 00:12:08.932 "is_configured": false, 00:12:08.933 "data_offset": 0, 00:12:08.933 "data_size": 63488 00:12:08.933 }, 00:12:08.933 { 00:12:08.933 "name": "BaseBdev3", 00:12:08.933 "uuid": "b499184d-a4b1-4968-b342-ab9f62de9e83", 00:12:08.933 "is_configured": true, 00:12:08.933 "data_offset": 2048, 00:12:08.933 "data_size": 63488 00:12:08.933 }, 00:12:08.933 { 00:12:08.933 "name": "BaseBdev4", 00:12:08.933 "uuid": "bafc4ab7-908b-4163-a2a7-675d8195e1de", 00:12:08.933 "is_configured": true, 00:12:08.933 "data_offset": 2048, 00:12:08.933 "data_size": 63488 00:12:08.933 } 00:12:08.933 ] 00:12:08.933 }' 00:12:08.933 04:03:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:08.933 04:03:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:09.192 04:03:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:09.192 04:03:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:09.192 04:03:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:09.192 04:03:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:12:09.192 04:03:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:09.192 04:03:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:12:09.192 04:03:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:12:09.192 04:03:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:09.192 04:03:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:09.192 [2024-12-06 04:03:02.523764] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:12:09.192 04:03:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:09.192 04:03:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:12:09.192 04:03:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:09.192 04:03:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:09.192 04:03:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:12:09.192 04:03:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:09.192 04:03:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:09.192 04:03:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:09.192 04:03:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:09.192 04:03:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:09.192 04:03:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:09.192 04:03:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:09.192 04:03:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:09.192 04:03:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:09.192 04:03:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:09.451 04:03:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:09.451 04:03:02 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:09.451 "name": "Existed_Raid", 00:12:09.451 "uuid": "43929488-975e-4196-bd0c-cd5781ccf7fa", 00:12:09.451 "strip_size_kb": 64, 00:12:09.451 "state": "configuring", 00:12:09.451 "raid_level": "raid0", 00:12:09.451 "superblock": true, 00:12:09.451 "num_base_bdevs": 4, 00:12:09.451 "num_base_bdevs_discovered": 2, 00:12:09.451 "num_base_bdevs_operational": 4, 00:12:09.451 "base_bdevs_list": [ 00:12:09.451 { 00:12:09.451 "name": "BaseBdev1", 00:12:09.451 "uuid": "e988b6d4-6584-4384-b7be-41ef96cb0b20", 00:12:09.451 "is_configured": true, 00:12:09.451 "data_offset": 2048, 00:12:09.451 "data_size": 63488 00:12:09.451 }, 00:12:09.451 { 00:12:09.451 "name": null, 00:12:09.451 "uuid": "71b93141-7375-4ece-af3a-c2faedb776f0", 00:12:09.451 "is_configured": false, 00:12:09.451 "data_offset": 0, 00:12:09.451 "data_size": 63488 00:12:09.451 }, 00:12:09.451 { 00:12:09.451 "name": null, 00:12:09.451 "uuid": "b499184d-a4b1-4968-b342-ab9f62de9e83", 00:12:09.451 "is_configured": false, 00:12:09.451 "data_offset": 0, 00:12:09.451 "data_size": 63488 00:12:09.451 }, 00:12:09.451 { 00:12:09.451 "name": "BaseBdev4", 00:12:09.451 "uuid": "bafc4ab7-908b-4163-a2a7-675d8195e1de", 00:12:09.451 "is_configured": true, 00:12:09.451 "data_offset": 2048, 00:12:09.451 "data_size": 63488 00:12:09.451 } 00:12:09.451 ] 00:12:09.451 }' 00:12:09.451 04:03:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:09.451 04:03:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:09.709 04:03:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:09.709 04:03:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:12:09.709 04:03:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:09.709 
04:03:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:09.709 04:03:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:09.709 04:03:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:12:09.709 04:03:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:12:09.709 04:03:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:09.709 04:03:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:09.709 [2024-12-06 04:03:03.038890] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:09.709 04:03:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:09.709 04:03:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:12:09.709 04:03:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:09.709 04:03:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:09.709 04:03:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:12:09.709 04:03:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:09.709 04:03:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:09.709 04:03:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:09.709 04:03:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:09.709 04:03:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:12:09.709 04:03:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:09.709 04:03:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:09.709 04:03:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:09.709 04:03:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:09.709 04:03:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:09.967 04:03:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:09.967 04:03:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:09.967 "name": "Existed_Raid", 00:12:09.967 "uuid": "43929488-975e-4196-bd0c-cd5781ccf7fa", 00:12:09.967 "strip_size_kb": 64, 00:12:09.967 "state": "configuring", 00:12:09.967 "raid_level": "raid0", 00:12:09.967 "superblock": true, 00:12:09.967 "num_base_bdevs": 4, 00:12:09.967 "num_base_bdevs_discovered": 3, 00:12:09.967 "num_base_bdevs_operational": 4, 00:12:09.967 "base_bdevs_list": [ 00:12:09.967 { 00:12:09.967 "name": "BaseBdev1", 00:12:09.967 "uuid": "e988b6d4-6584-4384-b7be-41ef96cb0b20", 00:12:09.967 "is_configured": true, 00:12:09.967 "data_offset": 2048, 00:12:09.967 "data_size": 63488 00:12:09.967 }, 00:12:09.967 { 00:12:09.967 "name": null, 00:12:09.967 "uuid": "71b93141-7375-4ece-af3a-c2faedb776f0", 00:12:09.967 "is_configured": false, 00:12:09.967 "data_offset": 0, 00:12:09.967 "data_size": 63488 00:12:09.967 }, 00:12:09.967 { 00:12:09.967 "name": "BaseBdev3", 00:12:09.967 "uuid": "b499184d-a4b1-4968-b342-ab9f62de9e83", 00:12:09.967 "is_configured": true, 00:12:09.967 "data_offset": 2048, 00:12:09.967 "data_size": 63488 00:12:09.967 }, 00:12:09.967 { 00:12:09.967 "name": "BaseBdev4", 00:12:09.967 "uuid": 
"bafc4ab7-908b-4163-a2a7-675d8195e1de", 00:12:09.967 "is_configured": true, 00:12:09.967 "data_offset": 2048, 00:12:09.967 "data_size": 63488 00:12:09.967 } 00:12:09.967 ] 00:12:09.967 }' 00:12:09.967 04:03:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:09.967 04:03:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:10.225 04:03:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:10.225 04:03:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:12:10.225 04:03:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:10.225 04:03:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:10.225 04:03:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:10.225 04:03:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:12:10.225 04:03:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:12:10.225 04:03:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:10.225 04:03:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:10.225 [2024-12-06 04:03:03.522142] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:12:10.485 04:03:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:10.485 04:03:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:12:10.485 04:03:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:10.485 04:03:03 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:10.485 04:03:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:12:10.485 04:03:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:10.485 04:03:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:10.485 04:03:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:10.485 04:03:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:10.485 04:03:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:10.485 04:03:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:10.485 04:03:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:10.485 04:03:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:10.485 04:03:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:10.485 04:03:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:10.485 04:03:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:10.485 04:03:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:10.485 "name": "Existed_Raid", 00:12:10.485 "uuid": "43929488-975e-4196-bd0c-cd5781ccf7fa", 00:12:10.485 "strip_size_kb": 64, 00:12:10.485 "state": "configuring", 00:12:10.485 "raid_level": "raid0", 00:12:10.485 "superblock": true, 00:12:10.485 "num_base_bdevs": 4, 00:12:10.485 "num_base_bdevs_discovered": 2, 00:12:10.485 "num_base_bdevs_operational": 4, 00:12:10.485 "base_bdevs_list": [ 00:12:10.485 { 00:12:10.485 "name": null, 00:12:10.485 
"uuid": "e988b6d4-6584-4384-b7be-41ef96cb0b20", 00:12:10.485 "is_configured": false, 00:12:10.485 "data_offset": 0, 00:12:10.485 "data_size": 63488 00:12:10.485 }, 00:12:10.485 { 00:12:10.485 "name": null, 00:12:10.486 "uuid": "71b93141-7375-4ece-af3a-c2faedb776f0", 00:12:10.486 "is_configured": false, 00:12:10.486 "data_offset": 0, 00:12:10.486 "data_size": 63488 00:12:10.486 }, 00:12:10.486 { 00:12:10.486 "name": "BaseBdev3", 00:12:10.486 "uuid": "b499184d-a4b1-4968-b342-ab9f62de9e83", 00:12:10.486 "is_configured": true, 00:12:10.486 "data_offset": 2048, 00:12:10.486 "data_size": 63488 00:12:10.486 }, 00:12:10.486 { 00:12:10.486 "name": "BaseBdev4", 00:12:10.486 "uuid": "bafc4ab7-908b-4163-a2a7-675d8195e1de", 00:12:10.486 "is_configured": true, 00:12:10.486 "data_offset": 2048, 00:12:10.486 "data_size": 63488 00:12:10.486 } 00:12:10.486 ] 00:12:10.486 }' 00:12:10.486 04:03:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:10.486 04:03:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:10.743 04:03:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:12:10.743 04:03:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:10.743 04:03:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:10.743 04:03:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:11.001 04:03:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:11.001 04:03:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:12:11.001 04:03:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:12:11.001 04:03:04 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:12:11.001 04:03:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:11.001 [2024-12-06 04:03:04.115234] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:11.001 04:03:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:11.001 04:03:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:12:11.001 04:03:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:11.001 04:03:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:11.001 04:03:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:12:11.001 04:03:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:11.001 04:03:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:11.001 04:03:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:11.001 04:03:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:11.001 04:03:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:11.001 04:03:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:11.001 04:03:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:11.001 04:03:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:11.001 04:03:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:11.001 04:03:04 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:11.001 04:03:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:11.001 04:03:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:11.001 "name": "Existed_Raid", 00:12:11.001 "uuid": "43929488-975e-4196-bd0c-cd5781ccf7fa", 00:12:11.001 "strip_size_kb": 64, 00:12:11.001 "state": "configuring", 00:12:11.001 "raid_level": "raid0", 00:12:11.001 "superblock": true, 00:12:11.001 "num_base_bdevs": 4, 00:12:11.001 "num_base_bdevs_discovered": 3, 00:12:11.001 "num_base_bdevs_operational": 4, 00:12:11.001 "base_bdevs_list": [ 00:12:11.001 { 00:12:11.001 "name": null, 00:12:11.001 "uuid": "e988b6d4-6584-4384-b7be-41ef96cb0b20", 00:12:11.001 "is_configured": false, 00:12:11.001 "data_offset": 0, 00:12:11.001 "data_size": 63488 00:12:11.001 }, 00:12:11.001 { 00:12:11.001 "name": "BaseBdev2", 00:12:11.001 "uuid": "71b93141-7375-4ece-af3a-c2faedb776f0", 00:12:11.001 "is_configured": true, 00:12:11.001 "data_offset": 2048, 00:12:11.001 "data_size": 63488 00:12:11.001 }, 00:12:11.001 { 00:12:11.001 "name": "BaseBdev3", 00:12:11.001 "uuid": "b499184d-a4b1-4968-b342-ab9f62de9e83", 00:12:11.001 "is_configured": true, 00:12:11.001 "data_offset": 2048, 00:12:11.001 "data_size": 63488 00:12:11.001 }, 00:12:11.001 { 00:12:11.001 "name": "BaseBdev4", 00:12:11.001 "uuid": "bafc4ab7-908b-4163-a2a7-675d8195e1de", 00:12:11.001 "is_configured": true, 00:12:11.001 "data_offset": 2048, 00:12:11.001 "data_size": 63488 00:12:11.001 } 00:12:11.001 ] 00:12:11.001 }' 00:12:11.001 04:03:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:11.001 04:03:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:11.259 04:03:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:11.259 04:03:04 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:12:11.259 04:03:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:11.259 04:03:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:11.259 04:03:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:11.259 04:03:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:12:11.259 04:03:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:12:11.259 04:03:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:11.259 04:03:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:11.259 04:03:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:11.259 04:03:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:11.518 04:03:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u e988b6d4-6584-4384-b7be-41ef96cb0b20 00:12:11.518 04:03:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:11.518 04:03:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:11.518 [2024-12-06 04:03:04.672991] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:12:11.518 [2024-12-06 04:03:04.673356] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:12:11.518 [2024-12-06 04:03:04.673382] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:12:11.518 NewBaseBdev 00:12:11.518 [2024-12-06 04:03:04.673722] bdev_raid.c: 
265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:12:11.518 [2024-12-06 04:03:04.673898] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:12:11.518 [2024-12-06 04:03:04.673913] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:12:11.518 [2024-12-06 04:03:04.674081] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:11.518 04:03:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:11.518 04:03:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:12:11.518 04:03:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:12:11.518 04:03:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:11.518 04:03:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:12:11.518 04:03:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:11.518 04:03:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:11.518 04:03:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:12:11.518 04:03:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:11.518 04:03:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:11.518 04:03:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:11.518 04:03:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:12:11.518 04:03:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:11.518 
04:03:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:11.518 [ 00:12:11.518 { 00:12:11.518 "name": "NewBaseBdev", 00:12:11.518 "aliases": [ 00:12:11.518 "e988b6d4-6584-4384-b7be-41ef96cb0b20" 00:12:11.518 ], 00:12:11.518 "product_name": "Malloc disk", 00:12:11.518 "block_size": 512, 00:12:11.518 "num_blocks": 65536, 00:12:11.518 "uuid": "e988b6d4-6584-4384-b7be-41ef96cb0b20", 00:12:11.518 "assigned_rate_limits": { 00:12:11.518 "rw_ios_per_sec": 0, 00:12:11.518 "rw_mbytes_per_sec": 0, 00:12:11.518 "r_mbytes_per_sec": 0, 00:12:11.518 "w_mbytes_per_sec": 0 00:12:11.518 }, 00:12:11.518 "claimed": true, 00:12:11.518 "claim_type": "exclusive_write", 00:12:11.518 "zoned": false, 00:12:11.518 "supported_io_types": { 00:12:11.518 "read": true, 00:12:11.518 "write": true, 00:12:11.518 "unmap": true, 00:12:11.518 "flush": true, 00:12:11.518 "reset": true, 00:12:11.518 "nvme_admin": false, 00:12:11.518 "nvme_io": false, 00:12:11.518 "nvme_io_md": false, 00:12:11.518 "write_zeroes": true, 00:12:11.518 "zcopy": true, 00:12:11.518 "get_zone_info": false, 00:12:11.518 "zone_management": false, 00:12:11.518 "zone_append": false, 00:12:11.518 "compare": false, 00:12:11.518 "compare_and_write": false, 00:12:11.518 "abort": true, 00:12:11.518 "seek_hole": false, 00:12:11.518 "seek_data": false, 00:12:11.518 "copy": true, 00:12:11.518 "nvme_iov_md": false 00:12:11.518 }, 00:12:11.518 "memory_domains": [ 00:12:11.518 { 00:12:11.518 "dma_device_id": "system", 00:12:11.518 "dma_device_type": 1 00:12:11.518 }, 00:12:11.518 { 00:12:11.518 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:11.518 "dma_device_type": 2 00:12:11.518 } 00:12:11.518 ], 00:12:11.518 "driver_specific": {} 00:12:11.518 } 00:12:11.518 ] 00:12:11.518 04:03:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:11.518 04:03:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:12:11.518 04:03:04 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid0 64 4 00:12:11.519 04:03:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:11.519 04:03:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:11.519 04:03:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:12:11.519 04:03:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:11.519 04:03:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:11.519 04:03:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:11.519 04:03:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:11.519 04:03:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:11.519 04:03:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:11.519 04:03:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:11.519 04:03:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:11.519 04:03:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:11.519 04:03:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:11.519 04:03:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:11.519 04:03:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:11.519 "name": "Existed_Raid", 00:12:11.519 "uuid": "43929488-975e-4196-bd0c-cd5781ccf7fa", 00:12:11.519 "strip_size_kb": 64, 00:12:11.519 
"state": "online", 00:12:11.519 "raid_level": "raid0", 00:12:11.519 "superblock": true, 00:12:11.519 "num_base_bdevs": 4, 00:12:11.519 "num_base_bdevs_discovered": 4, 00:12:11.519 "num_base_bdevs_operational": 4, 00:12:11.519 "base_bdevs_list": [ 00:12:11.519 { 00:12:11.519 "name": "NewBaseBdev", 00:12:11.519 "uuid": "e988b6d4-6584-4384-b7be-41ef96cb0b20", 00:12:11.519 "is_configured": true, 00:12:11.519 "data_offset": 2048, 00:12:11.519 "data_size": 63488 00:12:11.519 }, 00:12:11.519 { 00:12:11.519 "name": "BaseBdev2", 00:12:11.519 "uuid": "71b93141-7375-4ece-af3a-c2faedb776f0", 00:12:11.519 "is_configured": true, 00:12:11.519 "data_offset": 2048, 00:12:11.519 "data_size": 63488 00:12:11.519 }, 00:12:11.519 { 00:12:11.519 "name": "BaseBdev3", 00:12:11.519 "uuid": "b499184d-a4b1-4968-b342-ab9f62de9e83", 00:12:11.519 "is_configured": true, 00:12:11.519 "data_offset": 2048, 00:12:11.519 "data_size": 63488 00:12:11.519 }, 00:12:11.519 { 00:12:11.519 "name": "BaseBdev4", 00:12:11.519 "uuid": "bafc4ab7-908b-4163-a2a7-675d8195e1de", 00:12:11.519 "is_configured": true, 00:12:11.519 "data_offset": 2048, 00:12:11.519 "data_size": 63488 00:12:11.519 } 00:12:11.519 ] 00:12:11.519 }' 00:12:11.519 04:03:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:11.519 04:03:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:12.085 04:03:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:12:12.085 04:03:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:12:12.085 04:03:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:12:12.085 04:03:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:12:12.085 04:03:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:12:12.085 
04:03:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:12:12.085 04:03:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:12:12.085 04:03:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:12:12.085 04:03:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:12.085 04:03:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:12.085 [2024-12-06 04:03:05.212802] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:12.085 04:03:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:12.085 04:03:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:12:12.085 "name": "Existed_Raid", 00:12:12.085 "aliases": [ 00:12:12.085 "43929488-975e-4196-bd0c-cd5781ccf7fa" 00:12:12.085 ], 00:12:12.085 "product_name": "Raid Volume", 00:12:12.085 "block_size": 512, 00:12:12.085 "num_blocks": 253952, 00:12:12.085 "uuid": "43929488-975e-4196-bd0c-cd5781ccf7fa", 00:12:12.085 "assigned_rate_limits": { 00:12:12.085 "rw_ios_per_sec": 0, 00:12:12.085 "rw_mbytes_per_sec": 0, 00:12:12.085 "r_mbytes_per_sec": 0, 00:12:12.085 "w_mbytes_per_sec": 0 00:12:12.085 }, 00:12:12.085 "claimed": false, 00:12:12.085 "zoned": false, 00:12:12.085 "supported_io_types": { 00:12:12.085 "read": true, 00:12:12.085 "write": true, 00:12:12.085 "unmap": true, 00:12:12.085 "flush": true, 00:12:12.085 "reset": true, 00:12:12.085 "nvme_admin": false, 00:12:12.085 "nvme_io": false, 00:12:12.085 "nvme_io_md": false, 00:12:12.085 "write_zeroes": true, 00:12:12.085 "zcopy": false, 00:12:12.085 "get_zone_info": false, 00:12:12.085 "zone_management": false, 00:12:12.085 "zone_append": false, 00:12:12.085 "compare": false, 00:12:12.085 "compare_and_write": false, 00:12:12.085 "abort": 
false, 00:12:12.085 "seek_hole": false, 00:12:12.085 "seek_data": false, 00:12:12.085 "copy": false, 00:12:12.085 "nvme_iov_md": false 00:12:12.085 }, 00:12:12.085 "memory_domains": [ 00:12:12.085 { 00:12:12.085 "dma_device_id": "system", 00:12:12.085 "dma_device_type": 1 00:12:12.085 }, 00:12:12.085 { 00:12:12.085 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:12.085 "dma_device_type": 2 00:12:12.085 }, 00:12:12.085 { 00:12:12.085 "dma_device_id": "system", 00:12:12.085 "dma_device_type": 1 00:12:12.085 }, 00:12:12.085 { 00:12:12.085 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:12.085 "dma_device_type": 2 00:12:12.085 }, 00:12:12.085 { 00:12:12.085 "dma_device_id": "system", 00:12:12.085 "dma_device_type": 1 00:12:12.085 }, 00:12:12.085 { 00:12:12.085 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:12.085 "dma_device_type": 2 00:12:12.085 }, 00:12:12.085 { 00:12:12.085 "dma_device_id": "system", 00:12:12.085 "dma_device_type": 1 00:12:12.085 }, 00:12:12.085 { 00:12:12.085 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:12.085 "dma_device_type": 2 00:12:12.085 } 00:12:12.085 ], 00:12:12.085 "driver_specific": { 00:12:12.085 "raid": { 00:12:12.085 "uuid": "43929488-975e-4196-bd0c-cd5781ccf7fa", 00:12:12.085 "strip_size_kb": 64, 00:12:12.085 "state": "online", 00:12:12.085 "raid_level": "raid0", 00:12:12.085 "superblock": true, 00:12:12.085 "num_base_bdevs": 4, 00:12:12.085 "num_base_bdevs_discovered": 4, 00:12:12.085 "num_base_bdevs_operational": 4, 00:12:12.085 "base_bdevs_list": [ 00:12:12.085 { 00:12:12.085 "name": "NewBaseBdev", 00:12:12.085 "uuid": "e988b6d4-6584-4384-b7be-41ef96cb0b20", 00:12:12.085 "is_configured": true, 00:12:12.085 "data_offset": 2048, 00:12:12.085 "data_size": 63488 00:12:12.085 }, 00:12:12.085 { 00:12:12.085 "name": "BaseBdev2", 00:12:12.085 "uuid": "71b93141-7375-4ece-af3a-c2faedb776f0", 00:12:12.085 "is_configured": true, 00:12:12.085 "data_offset": 2048, 00:12:12.085 "data_size": 63488 00:12:12.085 }, 00:12:12.085 { 00:12:12.085 
"name": "BaseBdev3", 00:12:12.085 "uuid": "b499184d-a4b1-4968-b342-ab9f62de9e83", 00:12:12.085 "is_configured": true, 00:12:12.085 "data_offset": 2048, 00:12:12.085 "data_size": 63488 00:12:12.085 }, 00:12:12.085 { 00:12:12.085 "name": "BaseBdev4", 00:12:12.085 "uuid": "bafc4ab7-908b-4163-a2a7-675d8195e1de", 00:12:12.085 "is_configured": true, 00:12:12.085 "data_offset": 2048, 00:12:12.085 "data_size": 63488 00:12:12.085 } 00:12:12.085 ] 00:12:12.085 } 00:12:12.085 } 00:12:12.085 }' 00:12:12.086 04:03:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:12:12.086 04:03:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:12:12.086 BaseBdev2 00:12:12.086 BaseBdev3 00:12:12.086 BaseBdev4' 00:12:12.086 04:03:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:12.086 04:03:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:12:12.086 04:03:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:12.086 04:03:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:12.086 04:03:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:12:12.086 04:03:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:12.086 04:03:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:12.086 04:03:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:12.086 04:03:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:12.086 04:03:05 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:12.086 04:03:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:12.086 04:03:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:12:12.086 04:03:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:12.086 04:03:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:12.086 04:03:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:12.343 04:03:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:12.343 04:03:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:12.343 04:03:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:12.343 04:03:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:12.343 04:03:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:12:12.343 04:03:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:12.343 04:03:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:12.343 04:03:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:12.343 04:03:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:12.343 04:03:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:12.343 04:03:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # 
[[ 512 == \5\1\2\ \ \ ]] 00:12:12.343 04:03:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:12.343 04:03:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:12:12.343 04:03:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:12.343 04:03:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:12.343 04:03:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:12.343 04:03:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:12.343 04:03:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:12.343 04:03:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:12.343 04:03:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:12:12.343 04:03:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:12.343 04:03:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:12.343 [2024-12-06 04:03:05.555693] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:12:12.343 [2024-12-06 04:03:05.555745] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:12.343 [2024-12-06 04:03:05.555858] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:12.343 [2024-12-06 04:03:05.555952] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:12.343 [2024-12-06 04:03:05.555970] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, 
state offline 00:12:12.343 04:03:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:12.343 04:03:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 70129 00:12:12.343 04:03:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 70129 ']' 00:12:12.343 04:03:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 70129 00:12:12.343 04:03:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:12:12.343 04:03:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:12.343 04:03:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 70129 00:12:12.343 killing process with pid 70129 00:12:12.343 04:03:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:12.343 04:03:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:12.343 04:03:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 70129' 00:12:12.343 04:03:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 70129 00:12:12.343 04:03:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 70129 00:12:12.343 [2024-12-06 04:03:05.597143] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:12:12.906 [2024-12-06 04:03:06.111329] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:12:14.279 04:03:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:12:14.279 00:12:14.279 real 0m12.174s 00:12:14.279 user 0m19.136s 00:12:14.279 sys 0m1.952s 00:12:14.279 04:03:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:14.279 04:03:07 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:14.279 ************************************ 00:12:14.279 END TEST raid_state_function_test_sb 00:12:14.279 ************************************ 00:12:14.279 04:03:07 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test raid0 4 00:12:14.279 04:03:07 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:12:14.279 04:03:07 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:14.279 04:03:07 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:12:14.279 ************************************ 00:12:14.279 START TEST raid_superblock_test 00:12:14.279 ************************************ 00:12:14.279 04:03:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test raid0 4 00:12:14.279 04:03:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid0 00:12:14.279 04:03:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=4 00:12:14.279 04:03:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:12:14.279 04:03:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:12:14.279 04:03:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:12:14.279 04:03:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:12:14.279 04:03:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:12:14.279 04:03:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:12:14.279 04:03:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:12:14.279 04:03:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:12:14.279 04:03:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 
-- # local strip_size_create_arg 00:12:14.279 04:03:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:12:14.279 04:03:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:12:14.279 04:03:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid0 '!=' raid1 ']' 00:12:14.279 04:03:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:12:14.279 04:03:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:12:14.279 04:03:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=70809 00:12:14.279 04:03:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 70809 00:12:14.279 04:03:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 70809 ']' 00:12:14.279 04:03:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:14.279 04:03:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:14.279 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:14.279 04:03:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:14.279 04:03:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:14.279 04:03:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:14.279 04:03:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:12:14.543 [2024-12-06 04:03:07.703154] Starting SPDK v25.01-pre git sha1 a4d2a837b / DPDK 24.03.0 initialization... 
00:12:14.543 [2024-12-06 04:03:07.703289] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70809 ] 00:12:14.543 [2024-12-06 04:03:07.881159] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:14.810 [2024-12-06 04:03:08.042706] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:15.068 [2024-12-06 04:03:08.320394] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:15.068 [2024-12-06 04:03:08.320472] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:15.327 04:03:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:15.327 04:03:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:12:15.327 04:03:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:12:15.327 04:03:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:12:15.327 04:03:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:12:15.328 04:03:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:12:15.328 04:03:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:12:15.328 04:03:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:12:15.328 04:03:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:12:15.328 04:03:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:12:15.328 04:03:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:12:15.328 
04:03:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:15.328 04:03:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:15.328 malloc1 00:12:15.328 04:03:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:15.328 04:03:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:12:15.328 04:03:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:15.328 04:03:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:15.328 [2024-12-06 04:03:08.641018] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:12:15.328 [2024-12-06 04:03:08.641120] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:15.328 [2024-12-06 04:03:08.641150] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:12:15.328 [2024-12-06 04:03:08.641163] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:15.328 [2024-12-06 04:03:08.644013] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:15.328 [2024-12-06 04:03:08.644069] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:12:15.328 pt1 00:12:15.328 04:03:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:15.328 04:03:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:12:15.328 04:03:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:12:15.328 04:03:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:12:15.328 04:03:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:12:15.328 04:03:08 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:12:15.328 04:03:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:12:15.328 04:03:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:12:15.328 04:03:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:12:15.328 04:03:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:12:15.328 04:03:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:15.328 04:03:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:15.586 malloc2 00:12:15.586 04:03:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:15.586 04:03:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:12:15.586 04:03:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:15.586 04:03:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:15.586 [2024-12-06 04:03:08.709909] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:12:15.586 [2024-12-06 04:03:08.709990] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:15.586 [2024-12-06 04:03:08.710024] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:12:15.586 [2024-12-06 04:03:08.710036] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:15.586 [2024-12-06 04:03:08.712723] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:15.586 [2024-12-06 04:03:08.712766] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:12:15.586 
pt2 00:12:15.586 04:03:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:15.586 04:03:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:12:15.586 04:03:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:12:15.586 04:03:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:12:15.586 04:03:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:12:15.586 04:03:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:12:15.586 04:03:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:12:15.586 04:03:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:12:15.586 04:03:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:12:15.586 04:03:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:12:15.586 04:03:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:15.586 04:03:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:15.586 malloc3 00:12:15.586 04:03:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:15.586 04:03:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:12:15.586 04:03:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:15.586 04:03:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:15.586 [2024-12-06 04:03:08.794593] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:12:15.586 [2024-12-06 04:03:08.794675] 
vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:15.586 [2024-12-06 04:03:08.794704] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:12:15.586 [2024-12-06 04:03:08.794715] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:15.586 [2024-12-06 04:03:08.797522] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:15.586 [2024-12-06 04:03:08.797566] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:12:15.586 pt3 00:12:15.586 04:03:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:15.586 04:03:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:12:15.586 04:03:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:12:15.586 04:03:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc4 00:12:15.586 04:03:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt4 00:12:15.586 04:03:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004 00:12:15.586 04:03:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:12:15.586 04:03:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:12:15.586 04:03:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:12:15.586 04:03:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc4 00:12:15.587 04:03:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:15.587 04:03:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:15.587 malloc4 00:12:15.587 04:03:08 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:15.587 04:03:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:12:15.587 04:03:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:15.587 04:03:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:15.587 [2024-12-06 04:03:08.863513] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:12:15.587 [2024-12-06 04:03:08.863605] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:15.587 [2024-12-06 04:03:08.863635] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:12:15.587 [2024-12-06 04:03:08.863647] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:15.587 [2024-12-06 04:03:08.866528] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:15.587 [2024-12-06 04:03:08.866572] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:12:15.587 pt4 00:12:15.587 04:03:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:15.587 04:03:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:12:15.587 04:03:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:12:15.587 04:03:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''pt1 pt2 pt3 pt4'\''' -n raid_bdev1 -s 00:12:15.587 04:03:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:15.587 04:03:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:15.587 [2024-12-06 04:03:08.875626] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:12:15.587 [2024-12-06 
04:03:08.878079] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:12:15.587 [2024-12-06 04:03:08.878189] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:12:15.587 [2024-12-06 04:03:08.878250] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:12:15.587 [2024-12-06 04:03:08.878481] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:12:15.587 [2024-12-06 04:03:08.878501] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:12:15.587 [2024-12-06 04:03:08.878848] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:12:15.587 [2024-12-06 04:03:08.879091] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:12:15.587 [2024-12-06 04:03:08.879115] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:12:15.587 [2024-12-06 04:03:08.879321] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:15.587 04:03:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:15.587 04:03:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:12:15.587 04:03:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:15.587 04:03:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:15.587 04:03:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:12:15.587 04:03:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:15.587 04:03:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:15.587 04:03:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:12:15.587 04:03:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:15.587 04:03:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:15.587 04:03:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:15.587 04:03:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:15.587 04:03:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:15.587 04:03:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:15.587 04:03:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:15.587 04:03:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:15.587 04:03:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:15.587 "name": "raid_bdev1", 00:12:15.587 "uuid": "76d13663-b15e-44f9-9783-b2b3cda574df", 00:12:15.587 "strip_size_kb": 64, 00:12:15.587 "state": "online", 00:12:15.587 "raid_level": "raid0", 00:12:15.587 "superblock": true, 00:12:15.587 "num_base_bdevs": 4, 00:12:15.587 "num_base_bdevs_discovered": 4, 00:12:15.587 "num_base_bdevs_operational": 4, 00:12:15.587 "base_bdevs_list": [ 00:12:15.587 { 00:12:15.587 "name": "pt1", 00:12:15.587 "uuid": "00000000-0000-0000-0000-000000000001", 00:12:15.587 "is_configured": true, 00:12:15.587 "data_offset": 2048, 00:12:15.587 "data_size": 63488 00:12:15.587 }, 00:12:15.587 { 00:12:15.587 "name": "pt2", 00:12:15.587 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:15.587 "is_configured": true, 00:12:15.587 "data_offset": 2048, 00:12:15.587 "data_size": 63488 00:12:15.587 }, 00:12:15.587 { 00:12:15.587 "name": "pt3", 00:12:15.587 "uuid": "00000000-0000-0000-0000-000000000003", 00:12:15.587 "is_configured": true, 00:12:15.587 "data_offset": 2048, 00:12:15.587 
"data_size": 63488 00:12:15.587 }, 00:12:15.587 { 00:12:15.587 "name": "pt4", 00:12:15.587 "uuid": "00000000-0000-0000-0000-000000000004", 00:12:15.587 "is_configured": true, 00:12:15.587 "data_offset": 2048, 00:12:15.587 "data_size": 63488 00:12:15.587 } 00:12:15.587 ] 00:12:15.587 }' 00:12:15.587 04:03:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:15.587 04:03:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:16.152 04:03:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:12:16.152 04:03:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:12:16.152 04:03:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:12:16.152 04:03:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:12:16.152 04:03:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:12:16.152 04:03:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:12:16.152 04:03:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:12:16.152 04:03:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:12:16.152 04:03:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:16.152 04:03:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:16.152 [2024-12-06 04:03:09.327466] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:16.153 04:03:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:16.153 04:03:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:12:16.153 "name": "raid_bdev1", 00:12:16.153 "aliases": [ 00:12:16.153 "76d13663-b15e-44f9-9783-b2b3cda574df" 
00:12:16.153 ], 00:12:16.153 "product_name": "Raid Volume", 00:12:16.153 "block_size": 512, 00:12:16.153 "num_blocks": 253952, 00:12:16.153 "uuid": "76d13663-b15e-44f9-9783-b2b3cda574df", 00:12:16.153 "assigned_rate_limits": { 00:12:16.153 "rw_ios_per_sec": 0, 00:12:16.153 "rw_mbytes_per_sec": 0, 00:12:16.153 "r_mbytes_per_sec": 0, 00:12:16.153 "w_mbytes_per_sec": 0 00:12:16.153 }, 00:12:16.153 "claimed": false, 00:12:16.153 "zoned": false, 00:12:16.153 "supported_io_types": { 00:12:16.153 "read": true, 00:12:16.153 "write": true, 00:12:16.153 "unmap": true, 00:12:16.153 "flush": true, 00:12:16.153 "reset": true, 00:12:16.153 "nvme_admin": false, 00:12:16.153 "nvme_io": false, 00:12:16.153 "nvme_io_md": false, 00:12:16.153 "write_zeroes": true, 00:12:16.153 "zcopy": false, 00:12:16.153 "get_zone_info": false, 00:12:16.153 "zone_management": false, 00:12:16.153 "zone_append": false, 00:12:16.153 "compare": false, 00:12:16.153 "compare_and_write": false, 00:12:16.153 "abort": false, 00:12:16.153 "seek_hole": false, 00:12:16.153 "seek_data": false, 00:12:16.153 "copy": false, 00:12:16.153 "nvme_iov_md": false 00:12:16.153 }, 00:12:16.153 "memory_domains": [ 00:12:16.153 { 00:12:16.153 "dma_device_id": "system", 00:12:16.153 "dma_device_type": 1 00:12:16.153 }, 00:12:16.153 { 00:12:16.153 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:16.153 "dma_device_type": 2 00:12:16.153 }, 00:12:16.153 { 00:12:16.153 "dma_device_id": "system", 00:12:16.153 "dma_device_type": 1 00:12:16.153 }, 00:12:16.153 { 00:12:16.153 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:16.153 "dma_device_type": 2 00:12:16.153 }, 00:12:16.153 { 00:12:16.153 "dma_device_id": "system", 00:12:16.153 "dma_device_type": 1 00:12:16.153 }, 00:12:16.153 { 00:12:16.153 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:16.153 "dma_device_type": 2 00:12:16.153 }, 00:12:16.153 { 00:12:16.153 "dma_device_id": "system", 00:12:16.153 "dma_device_type": 1 00:12:16.153 }, 00:12:16.153 { 00:12:16.153 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:12:16.153 "dma_device_type": 2 00:12:16.153 } 00:12:16.153 ], 00:12:16.153 "driver_specific": { 00:12:16.153 "raid": { 00:12:16.153 "uuid": "76d13663-b15e-44f9-9783-b2b3cda574df", 00:12:16.153 "strip_size_kb": 64, 00:12:16.153 "state": "online", 00:12:16.153 "raid_level": "raid0", 00:12:16.153 "superblock": true, 00:12:16.153 "num_base_bdevs": 4, 00:12:16.153 "num_base_bdevs_discovered": 4, 00:12:16.153 "num_base_bdevs_operational": 4, 00:12:16.153 "base_bdevs_list": [ 00:12:16.153 { 00:12:16.153 "name": "pt1", 00:12:16.153 "uuid": "00000000-0000-0000-0000-000000000001", 00:12:16.153 "is_configured": true, 00:12:16.153 "data_offset": 2048, 00:12:16.153 "data_size": 63488 00:12:16.153 }, 00:12:16.153 { 00:12:16.153 "name": "pt2", 00:12:16.153 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:16.153 "is_configured": true, 00:12:16.153 "data_offset": 2048, 00:12:16.153 "data_size": 63488 00:12:16.153 }, 00:12:16.153 { 00:12:16.153 "name": "pt3", 00:12:16.153 "uuid": "00000000-0000-0000-0000-000000000003", 00:12:16.153 "is_configured": true, 00:12:16.153 "data_offset": 2048, 00:12:16.153 "data_size": 63488 00:12:16.153 }, 00:12:16.153 { 00:12:16.153 "name": "pt4", 00:12:16.153 "uuid": "00000000-0000-0000-0000-000000000004", 00:12:16.153 "is_configured": true, 00:12:16.153 "data_offset": 2048, 00:12:16.153 "data_size": 63488 00:12:16.153 } 00:12:16.153 ] 00:12:16.153 } 00:12:16.153 } 00:12:16.153 }' 00:12:16.153 04:03:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:12:16.153 04:03:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:12:16.153 pt2 00:12:16.153 pt3 00:12:16.153 pt4' 00:12:16.153 04:03:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:16.153 04:03:09 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:12:16.153 04:03:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:16.153 04:03:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:12:16.153 04:03:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:16.153 04:03:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:16.153 04:03:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:16.153 04:03:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:16.412 04:03:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:16.412 04:03:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:16.412 04:03:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:16.412 04:03:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:12:16.412 04:03:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:16.412 04:03:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:16.412 04:03:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:16.412 04:03:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:16.412 04:03:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:16.412 04:03:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:16.412 04:03:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:16.412 04:03:09 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:12:16.412 04:03:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:16.412 04:03:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:16.412 04:03:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:16.412 04:03:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:16.412 04:03:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:16.412 04:03:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:16.412 04:03:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:16.412 04:03:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:12:16.412 04:03:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:16.412 04:03:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:16.412 04:03:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:16.412 04:03:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:16.412 04:03:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:16.412 04:03:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:16.412 04:03:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:12:16.412 04:03:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:16.412 04:03:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 
00:12:16.412 04:03:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:12:16.412 [2024-12-06 04:03:09.662856] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:16.412 04:03:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:16.412 04:03:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=76d13663-b15e-44f9-9783-b2b3cda574df 00:12:16.412 04:03:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 76d13663-b15e-44f9-9783-b2b3cda574df ']' 00:12:16.412 04:03:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:12:16.412 04:03:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:16.412 04:03:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:16.412 [2024-12-06 04:03:09.710395] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:12:16.412 [2024-12-06 04:03:09.710447] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:16.412 [2024-12-06 04:03:09.710576] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:16.412 [2024-12-06 04:03:09.710666] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:16.412 [2024-12-06 04:03:09.710686] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:12:16.413 04:03:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:16.413 04:03:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:16.413 04:03:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:16.413 04:03:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 
-- # set +x 00:12:16.413 04:03:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:12:16.413 04:03:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:16.673 04:03:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:12:16.673 04:03:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:12:16.673 04:03:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:12:16.673 04:03:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:12:16.673 04:03:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:16.673 04:03:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:16.673 04:03:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:16.673 04:03:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:12:16.673 04:03:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:12:16.673 04:03:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:16.673 04:03:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:16.673 04:03:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:16.673 04:03:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:12:16.673 04:03:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:12:16.673 04:03:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:16.673 04:03:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:16.673 04:03:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # 
[[ 0 == 0 ]] 00:12:16.673 04:03:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:12:16.673 04:03:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt4 00:12:16.673 04:03:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:16.673 04:03:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:16.673 04:03:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:16.673 04:03:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:12:16.673 04:03:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:16.673 04:03:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:16.673 04:03:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:12:16.673 04:03:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:16.673 04:03:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:12:16.673 04:03:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:12:16.673 04:03:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:12:16.673 04:03:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:12:16.673 04:03:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:12:16.673 04:03:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:16.673 04:03:09 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@644 -- # type -t rpc_cmd 00:12:16.673 04:03:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:16.673 04:03:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:12:16.673 04:03:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:16.673 04:03:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:16.673 [2024-12-06 04:03:09.866232] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:12:16.673 [2024-12-06 04:03:09.868841] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:12:16.673 [2024-12-06 04:03:09.868907] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:12:16.673 [2024-12-06 04:03:09.868950] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed 00:12:16.673 [2024-12-06 04:03:09.869023] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:12:16.673 [2024-12-06 04:03:09.869109] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:12:16.673 [2024-12-06 04:03:09.869135] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:12:16.673 [2024-12-06 04:03:09.869158] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc4 00:12:16.673 [2024-12-06 04:03:09.869176] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:12:16.673 [2024-12-06 04:03:09.869193] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state 
configuring 00:12:16.673 request: 00:12:16.673 { 00:12:16.673 "name": "raid_bdev1", 00:12:16.673 "raid_level": "raid0", 00:12:16.673 "base_bdevs": [ 00:12:16.673 "malloc1", 00:12:16.673 "malloc2", 00:12:16.674 "malloc3", 00:12:16.674 "malloc4" 00:12:16.674 ], 00:12:16.674 "strip_size_kb": 64, 00:12:16.674 "superblock": false, 00:12:16.674 "method": "bdev_raid_create", 00:12:16.674 "req_id": 1 00:12:16.674 } 00:12:16.674 Got JSON-RPC error response 00:12:16.674 response: 00:12:16.674 { 00:12:16.674 "code": -17, 00:12:16.674 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:12:16.674 } 00:12:16.674 04:03:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:12:16.674 04:03:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:12:16.674 04:03:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:12:16.674 04:03:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:12:16.674 04:03:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:12:16.674 04:03:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:16.674 04:03:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:12:16.674 04:03:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:16.674 04:03:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:16.674 04:03:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:16.674 04:03:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:12:16.674 04:03:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:12:16.674 04:03:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 
00000000-0000-0000-0000-000000000001 00:12:16.674 04:03:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:16.674 04:03:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:16.674 [2024-12-06 04:03:09.926084] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:12:16.674 [2024-12-06 04:03:09.926187] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:16.674 [2024-12-06 04:03:09.926216] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:12:16.674 [2024-12-06 04:03:09.926231] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:16.674 [2024-12-06 04:03:09.929169] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:16.674 [2024-12-06 04:03:09.929216] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:12:16.674 [2024-12-06 04:03:09.929335] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:12:16.674 [2024-12-06 04:03:09.929406] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:12:16.674 pt1 00:12:16.674 04:03:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:16.674 04:03:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 4 00:12:16.674 04:03:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:16.674 04:03:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:16.674 04:03:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:12:16.674 04:03:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:16.674 04:03:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=4 00:12:16.674 04:03:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:16.674 04:03:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:16.674 04:03:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:16.674 04:03:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:16.674 04:03:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:16.674 04:03:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:16.674 04:03:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:16.674 04:03:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:16.674 04:03:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:16.674 04:03:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:16.674 "name": "raid_bdev1", 00:12:16.674 "uuid": "76d13663-b15e-44f9-9783-b2b3cda574df", 00:12:16.674 "strip_size_kb": 64, 00:12:16.674 "state": "configuring", 00:12:16.674 "raid_level": "raid0", 00:12:16.674 "superblock": true, 00:12:16.674 "num_base_bdevs": 4, 00:12:16.674 "num_base_bdevs_discovered": 1, 00:12:16.674 "num_base_bdevs_operational": 4, 00:12:16.674 "base_bdevs_list": [ 00:12:16.674 { 00:12:16.674 "name": "pt1", 00:12:16.674 "uuid": "00000000-0000-0000-0000-000000000001", 00:12:16.674 "is_configured": true, 00:12:16.674 "data_offset": 2048, 00:12:16.674 "data_size": 63488 00:12:16.674 }, 00:12:16.674 { 00:12:16.674 "name": null, 00:12:16.674 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:16.674 "is_configured": false, 00:12:16.674 "data_offset": 2048, 00:12:16.674 "data_size": 63488 00:12:16.674 }, 00:12:16.674 { 00:12:16.674 "name": null, 00:12:16.674 "uuid": 
"00000000-0000-0000-0000-000000000003", 00:12:16.674 "is_configured": false, 00:12:16.674 "data_offset": 2048, 00:12:16.674 "data_size": 63488 00:12:16.674 }, 00:12:16.674 { 00:12:16.674 "name": null, 00:12:16.674 "uuid": "00000000-0000-0000-0000-000000000004", 00:12:16.674 "is_configured": false, 00:12:16.674 "data_offset": 2048, 00:12:16.674 "data_size": 63488 00:12:16.674 } 00:12:16.674 ] 00:12:16.674 }' 00:12:16.674 04:03:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:16.674 04:03:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:17.240 04:03:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 4 -gt 2 ']' 00:12:17.240 04:03:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:12:17.240 04:03:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:17.240 04:03:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:17.240 [2024-12-06 04:03:10.385331] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:12:17.240 [2024-12-06 04:03:10.385448] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:17.240 [2024-12-06 04:03:10.385475] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:12:17.240 [2024-12-06 04:03:10.385490] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:17.240 [2024-12-06 04:03:10.386107] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:17.240 [2024-12-06 04:03:10.386145] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:12:17.240 [2024-12-06 04:03:10.386257] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:12:17.240 [2024-12-06 04:03:10.386296] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:12:17.240 pt2 00:12:17.240 04:03:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:17.240 04:03:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:12:17.240 04:03:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:17.240 04:03:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:17.240 [2024-12-06 04:03:10.393290] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:12:17.240 04:03:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:17.240 04:03:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 4 00:12:17.240 04:03:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:17.240 04:03:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:17.240 04:03:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:12:17.240 04:03:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:17.240 04:03:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:17.240 04:03:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:17.240 04:03:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:17.240 04:03:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:17.240 04:03:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:17.240 04:03:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:17.240 04:03:10 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:17.240 04:03:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:17.240 04:03:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:17.240 04:03:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:17.240 04:03:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:17.240 "name": "raid_bdev1", 00:12:17.240 "uuid": "76d13663-b15e-44f9-9783-b2b3cda574df", 00:12:17.240 "strip_size_kb": 64, 00:12:17.240 "state": "configuring", 00:12:17.240 "raid_level": "raid0", 00:12:17.240 "superblock": true, 00:12:17.240 "num_base_bdevs": 4, 00:12:17.240 "num_base_bdevs_discovered": 1, 00:12:17.240 "num_base_bdevs_operational": 4, 00:12:17.240 "base_bdevs_list": [ 00:12:17.240 { 00:12:17.240 "name": "pt1", 00:12:17.240 "uuid": "00000000-0000-0000-0000-000000000001", 00:12:17.240 "is_configured": true, 00:12:17.240 "data_offset": 2048, 00:12:17.240 "data_size": 63488 00:12:17.240 }, 00:12:17.240 { 00:12:17.240 "name": null, 00:12:17.240 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:17.240 "is_configured": false, 00:12:17.240 "data_offset": 0, 00:12:17.240 "data_size": 63488 00:12:17.240 }, 00:12:17.240 { 00:12:17.240 "name": null, 00:12:17.240 "uuid": "00000000-0000-0000-0000-000000000003", 00:12:17.240 "is_configured": false, 00:12:17.240 "data_offset": 2048, 00:12:17.240 "data_size": 63488 00:12:17.240 }, 00:12:17.240 { 00:12:17.240 "name": null, 00:12:17.240 "uuid": "00000000-0000-0000-0000-000000000004", 00:12:17.240 "is_configured": false, 00:12:17.240 "data_offset": 2048, 00:12:17.240 "data_size": 63488 00:12:17.240 } 00:12:17.240 ] 00:12:17.240 }' 00:12:17.240 04:03:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:17.240 04:03:10 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:12:17.499 04:03:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:12:17.499 04:03:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:12:17.499 04:03:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:12:17.499 04:03:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:17.499 04:03:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:17.499 [2024-12-06 04:03:10.808696] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:12:17.499 [2024-12-06 04:03:10.808806] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:17.499 [2024-12-06 04:03:10.808835] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:12:17.499 [2024-12-06 04:03:10.808848] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:17.499 [2024-12-06 04:03:10.809462] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:17.499 [2024-12-06 04:03:10.809497] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:12:17.499 [2024-12-06 04:03:10.809612] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:12:17.499 [2024-12-06 04:03:10.809646] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:12:17.499 pt2 00:12:17.499 04:03:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:17.499 04:03:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:12:17.499 04:03:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:12:17.499 04:03:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd 
bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:12:17.499 04:03:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:17.499 04:03:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:17.499 [2024-12-06 04:03:10.816592] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:12:17.499 [2024-12-06 04:03:10.816647] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:17.499 [2024-12-06 04:03:10.816668] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:12:17.499 [2024-12-06 04:03:10.816677] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:17.499 [2024-12-06 04:03:10.817136] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:17.499 [2024-12-06 04:03:10.817165] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:12:17.499 [2024-12-06 04:03:10.817236] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:12:17.499 [2024-12-06 04:03:10.817264] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:12:17.499 pt3 00:12:17.499 04:03:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:17.499 04:03:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:12:17.499 04:03:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:12:17.499 04:03:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:12:17.499 04:03:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:17.499 04:03:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:17.499 [2024-12-06 04:03:10.824546] 
vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:12:17.499 [2024-12-06 04:03:10.824589] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:17.499 [2024-12-06 04:03:10.824606] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:12:17.499 [2024-12-06 04:03:10.824616] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:17.499 [2024-12-06 04:03:10.825018] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:17.499 [2024-12-06 04:03:10.825063] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:12:17.499 [2024-12-06 04:03:10.825132] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:12:17.499 [2024-12-06 04:03:10.825156] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:12:17.499 [2024-12-06 04:03:10.825298] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:12:17.499 [2024-12-06 04:03:10.825315] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:12:17.499 [2024-12-06 04:03:10.825616] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:12:17.499 [2024-12-06 04:03:10.825789] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:12:17.499 [2024-12-06 04:03:10.825810] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:12:17.499 [2024-12-06 04:03:10.825945] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:17.499 pt4 00:12:17.499 04:03:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:17.499 04:03:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:12:17.499 04:03:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- 
# (( i < num_base_bdevs )) 00:12:17.499 04:03:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:12:17.499 04:03:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:17.499 04:03:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:17.499 04:03:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:12:17.499 04:03:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:17.499 04:03:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:17.499 04:03:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:17.500 04:03:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:17.500 04:03:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:17.500 04:03:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:17.500 04:03:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:17.500 04:03:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:17.500 04:03:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:17.500 04:03:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:17.500 04:03:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:17.758 04:03:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:17.758 "name": "raid_bdev1", 00:12:17.758 "uuid": "76d13663-b15e-44f9-9783-b2b3cda574df", 00:12:17.758 "strip_size_kb": 64, 00:12:17.758 "state": "online", 00:12:17.758 "raid_level": "raid0", 00:12:17.758 
"superblock": true, 00:12:17.758 "num_base_bdevs": 4, 00:12:17.758 "num_base_bdevs_discovered": 4, 00:12:17.758 "num_base_bdevs_operational": 4, 00:12:17.758 "base_bdevs_list": [ 00:12:17.758 { 00:12:17.758 "name": "pt1", 00:12:17.758 "uuid": "00000000-0000-0000-0000-000000000001", 00:12:17.758 "is_configured": true, 00:12:17.758 "data_offset": 2048, 00:12:17.758 "data_size": 63488 00:12:17.758 }, 00:12:17.758 { 00:12:17.758 "name": "pt2", 00:12:17.758 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:17.758 "is_configured": true, 00:12:17.758 "data_offset": 2048, 00:12:17.758 "data_size": 63488 00:12:17.758 }, 00:12:17.758 { 00:12:17.758 "name": "pt3", 00:12:17.758 "uuid": "00000000-0000-0000-0000-000000000003", 00:12:17.758 "is_configured": true, 00:12:17.758 "data_offset": 2048, 00:12:17.758 "data_size": 63488 00:12:17.758 }, 00:12:17.758 { 00:12:17.758 "name": "pt4", 00:12:17.758 "uuid": "00000000-0000-0000-0000-000000000004", 00:12:17.758 "is_configured": true, 00:12:17.758 "data_offset": 2048, 00:12:17.758 "data_size": 63488 00:12:17.758 } 00:12:17.758 ] 00:12:17.758 }' 00:12:17.758 04:03:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:17.758 04:03:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:18.017 04:03:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:12:18.017 04:03:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:12:18.017 04:03:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:12:18.017 04:03:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:12:18.017 04:03:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:12:18.017 04:03:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:12:18.017 04:03:11 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:12:18.017 04:03:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:18.017 04:03:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:18.017 04:03:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:12:18.017 [2024-12-06 04:03:11.240684] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:18.017 04:03:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:18.017 04:03:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:12:18.017 "name": "raid_bdev1", 00:12:18.017 "aliases": [ 00:12:18.017 "76d13663-b15e-44f9-9783-b2b3cda574df" 00:12:18.017 ], 00:12:18.017 "product_name": "Raid Volume", 00:12:18.017 "block_size": 512, 00:12:18.017 "num_blocks": 253952, 00:12:18.017 "uuid": "76d13663-b15e-44f9-9783-b2b3cda574df", 00:12:18.017 "assigned_rate_limits": { 00:12:18.017 "rw_ios_per_sec": 0, 00:12:18.017 "rw_mbytes_per_sec": 0, 00:12:18.017 "r_mbytes_per_sec": 0, 00:12:18.017 "w_mbytes_per_sec": 0 00:12:18.017 }, 00:12:18.017 "claimed": false, 00:12:18.017 "zoned": false, 00:12:18.017 "supported_io_types": { 00:12:18.017 "read": true, 00:12:18.017 "write": true, 00:12:18.017 "unmap": true, 00:12:18.017 "flush": true, 00:12:18.017 "reset": true, 00:12:18.017 "nvme_admin": false, 00:12:18.017 "nvme_io": false, 00:12:18.017 "nvme_io_md": false, 00:12:18.017 "write_zeroes": true, 00:12:18.017 "zcopy": false, 00:12:18.017 "get_zone_info": false, 00:12:18.017 "zone_management": false, 00:12:18.017 "zone_append": false, 00:12:18.017 "compare": false, 00:12:18.017 "compare_and_write": false, 00:12:18.017 "abort": false, 00:12:18.017 "seek_hole": false, 00:12:18.017 "seek_data": false, 00:12:18.017 "copy": false, 00:12:18.017 "nvme_iov_md": false 00:12:18.017 }, 00:12:18.017 
"memory_domains": [ 00:12:18.017 { 00:12:18.017 "dma_device_id": "system", 00:12:18.017 "dma_device_type": 1 00:12:18.017 }, 00:12:18.017 { 00:12:18.017 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:18.017 "dma_device_type": 2 00:12:18.017 }, 00:12:18.017 { 00:12:18.017 "dma_device_id": "system", 00:12:18.017 "dma_device_type": 1 00:12:18.017 }, 00:12:18.017 { 00:12:18.017 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:18.017 "dma_device_type": 2 00:12:18.017 }, 00:12:18.017 { 00:12:18.017 "dma_device_id": "system", 00:12:18.017 "dma_device_type": 1 00:12:18.017 }, 00:12:18.017 { 00:12:18.017 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:18.017 "dma_device_type": 2 00:12:18.017 }, 00:12:18.017 { 00:12:18.017 "dma_device_id": "system", 00:12:18.017 "dma_device_type": 1 00:12:18.017 }, 00:12:18.017 { 00:12:18.017 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:18.017 "dma_device_type": 2 00:12:18.017 } 00:12:18.017 ], 00:12:18.017 "driver_specific": { 00:12:18.017 "raid": { 00:12:18.017 "uuid": "76d13663-b15e-44f9-9783-b2b3cda574df", 00:12:18.017 "strip_size_kb": 64, 00:12:18.017 "state": "online", 00:12:18.017 "raid_level": "raid0", 00:12:18.017 "superblock": true, 00:12:18.017 "num_base_bdevs": 4, 00:12:18.017 "num_base_bdevs_discovered": 4, 00:12:18.017 "num_base_bdevs_operational": 4, 00:12:18.017 "base_bdevs_list": [ 00:12:18.017 { 00:12:18.017 "name": "pt1", 00:12:18.017 "uuid": "00000000-0000-0000-0000-000000000001", 00:12:18.017 "is_configured": true, 00:12:18.017 "data_offset": 2048, 00:12:18.017 "data_size": 63488 00:12:18.017 }, 00:12:18.017 { 00:12:18.017 "name": "pt2", 00:12:18.017 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:18.017 "is_configured": true, 00:12:18.017 "data_offset": 2048, 00:12:18.017 "data_size": 63488 00:12:18.017 }, 00:12:18.017 { 00:12:18.017 "name": "pt3", 00:12:18.017 "uuid": "00000000-0000-0000-0000-000000000003", 00:12:18.017 "is_configured": true, 00:12:18.017 "data_offset": 2048, 00:12:18.017 "data_size": 63488 
00:12:18.017 }, 00:12:18.017 { 00:12:18.017 "name": "pt4", 00:12:18.017 "uuid": "00000000-0000-0000-0000-000000000004", 00:12:18.017 "is_configured": true, 00:12:18.017 "data_offset": 2048, 00:12:18.017 "data_size": 63488 00:12:18.017 } 00:12:18.017 ] 00:12:18.017 } 00:12:18.017 } 00:12:18.017 }' 00:12:18.017 04:03:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:12:18.017 04:03:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:12:18.017 pt2 00:12:18.017 pt3 00:12:18.017 pt4' 00:12:18.017 04:03:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:18.275 04:03:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:12:18.275 04:03:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:18.275 04:03:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:18.275 04:03:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:12:18.275 04:03:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:18.275 04:03:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:18.275 04:03:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:18.275 04:03:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:18.275 04:03:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:18.275 04:03:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:18.275 04:03:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd 
bdev_get_bdevs -b pt2 00:12:18.275 04:03:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:18.276 04:03:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:18.276 04:03:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:18.276 04:03:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:18.276 04:03:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:18.276 04:03:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:18.276 04:03:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:18.276 04:03:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:18.276 04:03:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:12:18.276 04:03:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:18.276 04:03:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:18.276 04:03:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:18.276 04:03:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:18.276 04:03:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:18.276 04:03:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:18.276 04:03:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:12:18.276 04:03:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:18.276 04:03:11 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:12:18.276 04:03:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:18.276 04:03:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:18.276 04:03:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:18.276 04:03:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:18.276 04:03:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:12:18.276 04:03:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:12:18.276 04:03:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:18.276 04:03:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:18.276 [2024-12-06 04:03:11.516152] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:18.276 04:03:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:18.276 04:03:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 76d13663-b15e-44f9-9783-b2b3cda574df '!=' 76d13663-b15e-44f9-9783-b2b3cda574df ']' 00:12:18.276 04:03:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid0 00:12:18.276 04:03:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:12:18.276 04:03:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1 00:12:18.276 04:03:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 70809 00:12:18.276 04:03:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 70809 ']' 00:12:18.276 04:03:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # kill -0 70809 00:12:18.276 04:03:11 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@959 -- # uname 00:12:18.276 04:03:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:18.276 04:03:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 70809 00:12:18.276 04:03:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:18.276 04:03:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:18.276 killing process with pid 70809 00:12:18.276 04:03:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 70809' 00:12:18.276 04:03:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@973 -- # kill 70809 00:12:18.276 04:03:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@978 -- # wait 70809 00:12:18.276 [2024-12-06 04:03:11.582599] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:12:18.276 [2024-12-06 04:03:11.582741] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:18.276 [2024-12-06 04:03:11.582855] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:18.276 [2024-12-06 04:03:11.582868] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:12:18.841 [2024-12-06 04:03:12.114995] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:12:20.224 04:03:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:12:20.224 00:12:20.224 real 0m5.956s 00:12:20.224 user 0m8.187s 00:12:20.224 sys 0m0.947s 00:12:20.224 04:03:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:20.224 04:03:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:20.224 ************************************ 00:12:20.224 END TEST raid_superblock_test 
00:12:20.224 ************************************ 00:12:20.514 04:03:13 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid0 4 read 00:12:20.514 04:03:13 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:12:20.514 04:03:13 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:20.514 04:03:13 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:12:20.514 ************************************ 00:12:20.514 START TEST raid_read_error_test 00:12:20.514 ************************************ 00:12:20.514 04:03:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid0 4 read 00:12:20.514 04:03:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0 00:12:20.514 04:03:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4 00:12:20.514 04:03:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:12:20.514 04:03:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:12:20.514 04:03:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:20.514 04:03:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:12:20.514 04:03:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:12:20.514 04:03:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:20.514 04:03:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:12:20.514 04:03:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:12:20.514 04:03:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:20.514 04:03:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:12:20.514 04:03:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( 
i++ )) 00:12:20.514 04:03:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:20.514 04:03:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4 00:12:20.514 04:03:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:12:20.514 04:03:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:20.514 04:03:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:12:20.514 04:03:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:12:20.514 04:03:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:12:20.514 04:03:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:12:20.514 04:03:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:12:20.514 04:03:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:12:20.514 04:03:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:12:20.514 04:03:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']' 00:12:20.514 04:03:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:12:20.514 04:03:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:12:20.514 04:03:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:12:20.514 04:03:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.rLHGNZNPC6 00:12:20.514 04:03:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=71075 00:12:20.514 04:03:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 71075 00:12:20.514 04:03:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # 
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:12:20.514 04:03:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # '[' -z 71075 ']' 00:12:20.514 04:03:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:20.514 04:03:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:20.514 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:20.514 04:03:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:20.514 04:03:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:20.514 04:03:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:20.514 [2024-12-06 04:03:13.730759] Starting SPDK v25.01-pre git sha1 a4d2a837b / DPDK 24.03.0 initialization... 
00:12:20.514 [2024-12-06 04:03:13.730917] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71075 ] 00:12:20.771 [2024-12-06 04:03:13.915486] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:20.771 [2024-12-06 04:03:14.073817] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:21.029 [2024-12-06 04:03:14.346308] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:21.029 [2024-12-06 04:03:14.346385] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:21.594 04:03:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:21.594 04:03:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@868 -- # return 0 00:12:21.594 04:03:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:12:21.594 04:03:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:12:21.594 04:03:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:21.594 04:03:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:21.594 BaseBdev1_malloc 00:12:21.594 04:03:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:21.594 04:03:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:12:21.594 04:03:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:21.594 04:03:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:21.594 true 00:12:21.594 04:03:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:12:21.594 04:03:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:12:21.594 04:03:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:21.594 04:03:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:21.594 [2024-12-06 04:03:14.744424] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:12:21.594 [2024-12-06 04:03:14.744506] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:21.594 [2024-12-06 04:03:14.744533] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:12:21.594 [2024-12-06 04:03:14.744549] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:21.594 [2024-12-06 04:03:14.747387] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:21.594 [2024-12-06 04:03:14.747438] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:12:21.594 BaseBdev1 00:12:21.594 04:03:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:21.594 04:03:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:12:21.594 04:03:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:12:21.594 04:03:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:21.594 04:03:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:21.594 BaseBdev2_malloc 00:12:21.594 04:03:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:21.594 04:03:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:12:21.594 04:03:14 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:12:21.594 04:03:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:21.594 true 00:12:21.594 04:03:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:21.594 04:03:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:12:21.594 04:03:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:21.594 04:03:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:21.594 [2024-12-06 04:03:14.824602] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:12:21.594 [2024-12-06 04:03:14.824703] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:21.594 [2024-12-06 04:03:14.824732] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:12:21.594 [2024-12-06 04:03:14.824748] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:21.594 [2024-12-06 04:03:14.827755] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:21.594 [2024-12-06 04:03:14.827821] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:12:21.594 BaseBdev2 00:12:21.594 04:03:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:21.594 04:03:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:12:21.594 04:03:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:12:21.594 04:03:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:21.594 04:03:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:21.594 BaseBdev3_malloc 00:12:21.594 04:03:14 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:21.594 04:03:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:12:21.594 04:03:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:21.594 04:03:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:21.594 true 00:12:21.594 04:03:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:21.594 04:03:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:12:21.594 04:03:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:21.594 04:03:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:21.594 [2024-12-06 04:03:14.917499] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:12:21.594 [2024-12-06 04:03:14.917578] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:21.595 [2024-12-06 04:03:14.917603] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:12:21.595 [2024-12-06 04:03:14.917617] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:21.595 [2024-12-06 04:03:14.920575] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:21.595 [2024-12-06 04:03:14.920627] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:12:21.595 BaseBdev3 00:12:21.595 04:03:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:21.595 04:03:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:12:21.595 04:03:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b 
BaseBdev4_malloc 00:12:21.595 04:03:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:21.595 04:03:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:21.854 BaseBdev4_malloc 00:12:21.854 04:03:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:21.854 04:03:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc 00:12:21.854 04:03:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:21.854 04:03:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:21.854 true 00:12:21.854 04:03:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:21.854 04:03:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:12:21.854 04:03:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:21.854 04:03:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:21.854 [2024-12-06 04:03:15.001768] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:12:21.854 [2024-12-06 04:03:15.001860] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:21.854 [2024-12-06 04:03:15.001889] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:12:21.854 [2024-12-06 04:03:15.001903] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:21.854 [2024-12-06 04:03:15.005070] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:21.854 [2024-12-06 04:03:15.005131] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:12:21.854 BaseBdev4 00:12:21.854 04:03:15 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:21.854 04:03:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s 00:12:21.854 04:03:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:21.854 04:03:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:21.854 [2024-12-06 04:03:15.013834] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:21.854 [2024-12-06 04:03:15.016312] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:21.854 [2024-12-06 04:03:15.016417] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:21.854 [2024-12-06 04:03:15.016497] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:12:21.854 [2024-12-06 04:03:15.016778] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008580 00:12:21.854 [2024-12-06 04:03:15.016800] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:12:21.854 [2024-12-06 04:03:15.017129] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000068a0 00:12:21.854 [2024-12-06 04:03:15.017338] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008580 00:12:21.854 [2024-12-06 04:03:15.017352] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008580 00:12:21.854 [2024-12-06 04:03:15.017567] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:21.854 04:03:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:21.854 04:03:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:12:21.854 04:03:15 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:21.854 04:03:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:21.854 04:03:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:12:21.854 04:03:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:21.854 04:03:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:21.854 04:03:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:21.854 04:03:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:21.854 04:03:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:21.854 04:03:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:21.854 04:03:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:21.854 04:03:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:21.854 04:03:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:21.854 04:03:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:21.854 04:03:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:21.854 04:03:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:21.854 "name": "raid_bdev1", 00:12:21.854 "uuid": "a150a636-f75a-4f13-a1ce-33b46a818bc9", 00:12:21.854 "strip_size_kb": 64, 00:12:21.854 "state": "online", 00:12:21.854 "raid_level": "raid0", 00:12:21.854 "superblock": true, 00:12:21.854 "num_base_bdevs": 4, 00:12:21.854 "num_base_bdevs_discovered": 4, 00:12:21.854 "num_base_bdevs_operational": 4, 00:12:21.854 "base_bdevs_list": [ 00:12:21.854 
{ 00:12:21.854 "name": "BaseBdev1", 00:12:21.854 "uuid": "fce63afe-dea9-5d51-a622-7a9394daebbd", 00:12:21.854 "is_configured": true, 00:12:21.854 "data_offset": 2048, 00:12:21.854 "data_size": 63488 00:12:21.854 }, 00:12:21.854 { 00:12:21.854 "name": "BaseBdev2", 00:12:21.854 "uuid": "0b1c0f5b-61ed-500d-912e-9ce148f50c9f", 00:12:21.854 "is_configured": true, 00:12:21.854 "data_offset": 2048, 00:12:21.854 "data_size": 63488 00:12:21.854 }, 00:12:21.854 { 00:12:21.854 "name": "BaseBdev3", 00:12:21.854 "uuid": "6e46e098-716b-5448-ae48-7a0fca2238ab", 00:12:21.854 "is_configured": true, 00:12:21.854 "data_offset": 2048, 00:12:21.854 "data_size": 63488 00:12:21.854 }, 00:12:21.854 { 00:12:21.854 "name": "BaseBdev4", 00:12:21.854 "uuid": "920201db-a665-56fb-832e-d871e9e63c10", 00:12:21.854 "is_configured": true, 00:12:21.854 "data_offset": 2048, 00:12:21.854 "data_size": 63488 00:12:21.854 } 00:12:21.854 ] 00:12:21.854 }' 00:12:21.854 04:03:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:21.854 04:03:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:22.111 04:03:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:12:22.111 04:03:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:12:22.368 [2024-12-06 04:03:15.562521] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006a40 00:12:23.299 04:03:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:12:23.299 04:03:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:23.299 04:03:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:23.299 04:03:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:23.299 04:03:16 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:12:23.299 04:03:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]] 00:12:23.299 04:03:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=4 00:12:23.299 04:03:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:12:23.299 04:03:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:23.299 04:03:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:23.299 04:03:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:12:23.299 04:03:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:23.299 04:03:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:23.299 04:03:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:23.299 04:03:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:23.299 04:03:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:23.299 04:03:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:23.299 04:03:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:23.299 04:03:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:23.299 04:03:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:23.299 04:03:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:23.299 04:03:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:23.299 04:03:16 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:23.299 "name": "raid_bdev1", 00:12:23.299 "uuid": "a150a636-f75a-4f13-a1ce-33b46a818bc9", 00:12:23.299 "strip_size_kb": 64, 00:12:23.299 "state": "online", 00:12:23.299 "raid_level": "raid0", 00:12:23.299 "superblock": true, 00:12:23.299 "num_base_bdevs": 4, 00:12:23.299 "num_base_bdevs_discovered": 4, 00:12:23.299 "num_base_bdevs_operational": 4, 00:12:23.299 "base_bdevs_list": [ 00:12:23.299 { 00:12:23.299 "name": "BaseBdev1", 00:12:23.299 "uuid": "fce63afe-dea9-5d51-a622-7a9394daebbd", 00:12:23.299 "is_configured": true, 00:12:23.299 "data_offset": 2048, 00:12:23.299 "data_size": 63488 00:12:23.299 }, 00:12:23.299 { 00:12:23.299 "name": "BaseBdev2", 00:12:23.299 "uuid": "0b1c0f5b-61ed-500d-912e-9ce148f50c9f", 00:12:23.299 "is_configured": true, 00:12:23.299 "data_offset": 2048, 00:12:23.299 "data_size": 63488 00:12:23.299 }, 00:12:23.299 { 00:12:23.299 "name": "BaseBdev3", 00:12:23.299 "uuid": "6e46e098-716b-5448-ae48-7a0fca2238ab", 00:12:23.299 "is_configured": true, 00:12:23.299 "data_offset": 2048, 00:12:23.299 "data_size": 63488 00:12:23.299 }, 00:12:23.299 { 00:12:23.299 "name": "BaseBdev4", 00:12:23.299 "uuid": "920201db-a665-56fb-832e-d871e9e63c10", 00:12:23.299 "is_configured": true, 00:12:23.299 "data_offset": 2048, 00:12:23.299 "data_size": 63488 00:12:23.299 } 00:12:23.299 ] 00:12:23.299 }' 00:12:23.299 04:03:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:23.299 04:03:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:23.557 04:03:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:12:23.557 04:03:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:23.557 04:03:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:23.557 [2024-12-06 04:03:16.902165] 
bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:12:23.557 [2024-12-06 04:03:16.902213] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:23.557 [2024-12-06 04:03:16.905643] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:23.557 [2024-12-06 04:03:16.905782] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:23.557 [2024-12-06 04:03:16.905845] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:23.557 [2024-12-06 04:03:16.905862] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state offline 00:12:23.557 { 00:12:23.557 "results": [ 00:12:23.557 { 00:12:23.557 "job": "raid_bdev1", 00:12:23.557 "core_mask": "0x1", 00:12:23.557 "workload": "randrw", 00:12:23.557 "percentage": 50, 00:12:23.557 "status": "finished", 00:12:23.557 "queue_depth": 1, 00:12:23.557 "io_size": 131072, 00:12:23.557 "runtime": 1.339766, 00:12:23.557 "iops": 11189.26737952747, 00:12:23.557 "mibps": 1398.6584224409337, 00:12:23.557 "io_failed": 1, 00:12:23.557 "io_timeout": 0, 00:12:23.557 "avg_latency_us": 125.10840040452436, 00:12:23.557 "min_latency_us": 33.08995633187773, 00:12:23.557 "max_latency_us": 1738.564192139738 00:12:23.557 } 00:12:23.557 ], 00:12:23.557 "core_count": 1 00:12:23.557 } 00:12:23.557 04:03:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:23.557 04:03:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 71075 00:12:23.557 04:03:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # '[' -z 71075 ']' 00:12:23.557 04:03:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # kill -0 71075 00:12:23.815 04:03:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # uname 00:12:23.815 04:03:16 bdev_raid.raid_read_error_test 
-- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:23.815 04:03:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71075 00:12:23.815 killing process with pid 71075 00:12:23.815 04:03:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:23.815 04:03:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:23.815 04:03:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71075' 00:12:23.815 04:03:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@973 -- # kill 71075 00:12:23.815 [2024-12-06 04:03:16.938621] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:12:23.815 04:03:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@978 -- # wait 71075 00:12:24.072 [2024-12-06 04:03:17.376469] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:12:25.971 04:03:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.rLHGNZNPC6 00:12:25.971 04:03:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:12:25.971 04:03:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:12:25.971 04:03:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.75 00:12:25.971 04:03:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid0 00:12:25.971 04:03:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:12:25.971 04:03:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:12:25.971 04:03:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.75 != \0\.\0\0 ]] 00:12:25.971 00:12:25.971 real 0m5.331s 00:12:25.971 user 0m6.177s 00:12:25.971 sys 0m0.652s 00:12:25.971 04:03:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1130 -- # 
xtrace_disable 00:12:25.971 ************************************ 00:12:25.971 END TEST raid_read_error_test 00:12:25.971 ************************************ 00:12:25.971 04:03:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:25.971 04:03:18 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test raid0 4 write 00:12:25.971 04:03:18 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:12:25.971 04:03:18 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:25.971 04:03:18 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:12:25.971 ************************************ 00:12:25.971 START TEST raid_write_error_test 00:12:25.971 ************************************ 00:12:25.971 04:03:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid0 4 write 00:12:25.971 04:03:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0 00:12:25.971 04:03:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4 00:12:25.971 04:03:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:12:25.971 04:03:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:12:25.971 04:03:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:25.971 04:03:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:12:25.971 04:03:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:12:25.971 04:03:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:25.971 04:03:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:12:25.971 04:03:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:12:25.971 04:03:18 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:25.971 04:03:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:12:25.971 04:03:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:12:25.971 04:03:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:25.971 04:03:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4 00:12:25.971 04:03:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:12:25.971 04:03:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:25.971 04:03:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:12:25.971 04:03:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:12:25.971 04:03:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:12:25.971 04:03:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:12:25.971 04:03:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:12:25.971 04:03:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:12:25.971 04:03:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:12:25.971 04:03:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']' 00:12:25.971 04:03:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:12:25.971 04:03:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:12:25.972 04:03:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:12:25.972 04:03:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.EADIr0B5pU 00:12:25.972 04:03:18 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:12:25.972 04:03:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=71228 00:12:25.972 04:03:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 71228 00:12:25.972 04:03:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # '[' -z 71228 ']' 00:12:25.972 04:03:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:25.972 04:03:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:25.972 04:03:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:25.972 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:25.972 04:03:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:25.972 04:03:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:25.972 [2024-12-06 04:03:19.111218] Starting SPDK v25.01-pre git sha1 a4d2a837b / DPDK 24.03.0 initialization... 
00:12:25.972 [2024-12-06 04:03:19.111535] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71228 ] 00:12:25.972 [2024-12-06 04:03:19.304412] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:26.231 [2024-12-06 04:03:19.467435] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:26.488 [2024-12-06 04:03:19.757220] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:26.488 [2024-12-06 04:03:19.757378] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:26.748 04:03:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:26.748 04:03:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@868 -- # return 0 00:12:26.748 04:03:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:12:26.748 04:03:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:12:26.748 04:03:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:26.748 04:03:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:26.748 BaseBdev1_malloc 00:12:26.748 04:03:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:26.748 04:03:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:12:26.748 04:03:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:26.748 04:03:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:26.748 true 00:12:26.748 04:03:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:12:26.748 04:03:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:12:26.748 04:03:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:26.748 04:03:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:26.748 [2024-12-06 04:03:20.080942] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:12:26.748 [2024-12-06 04:03:20.081021] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:26.748 [2024-12-06 04:03:20.081061] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:12:26.748 [2024-12-06 04:03:20.081076] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:26.748 [2024-12-06 04:03:20.083908] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:26.748 [2024-12-06 04:03:20.083961] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:12:26.748 BaseBdev1 00:12:26.748 04:03:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:26.748 04:03:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:12:26.748 04:03:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:12:26.748 04:03:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:26.748 04:03:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:27.008 BaseBdev2_malloc 00:12:27.008 04:03:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:27.008 04:03:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:12:27.008 04:03:20 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:27.008 04:03:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:27.008 true 00:12:27.008 04:03:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:27.008 04:03:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:12:27.008 04:03:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:27.008 04:03:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:27.008 [2024-12-06 04:03:20.162321] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:12:27.008 [2024-12-06 04:03:20.162409] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:27.008 [2024-12-06 04:03:20.162435] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:12:27.008 [2024-12-06 04:03:20.162449] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:27.008 [2024-12-06 04:03:20.165362] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:27.008 [2024-12-06 04:03:20.165414] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:12:27.008 BaseBdev2 00:12:27.008 04:03:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:27.008 04:03:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:12:27.008 04:03:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:12:27.008 04:03:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:27.008 04:03:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 
00:12:27.008 BaseBdev3_malloc 00:12:27.008 04:03:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:27.008 04:03:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:12:27.008 04:03:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:27.008 04:03:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:27.008 true 00:12:27.008 04:03:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:27.008 04:03:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:12:27.008 04:03:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:27.008 04:03:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:27.008 [2024-12-06 04:03:20.258456] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:12:27.008 [2024-12-06 04:03:20.258643] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:27.008 [2024-12-06 04:03:20.258716] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:12:27.008 [2024-12-06 04:03:20.258761] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:27.008 [2024-12-06 04:03:20.261839] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:27.008 [2024-12-06 04:03:20.261948] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:12:27.008 BaseBdev3 00:12:27.008 04:03:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:27.008 04:03:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:12:27.008 04:03:20 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:12:27.008 04:03:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:27.008 04:03:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:27.008 BaseBdev4_malloc 00:12:27.008 04:03:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:27.008 04:03:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc 00:12:27.008 04:03:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:27.008 04:03:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:27.008 true 00:12:27.008 04:03:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:27.008 04:03:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:12:27.008 04:03:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:27.008 04:03:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:27.008 [2024-12-06 04:03:20.341230] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:12:27.008 [2024-12-06 04:03:20.341409] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:27.008 [2024-12-06 04:03:20.341481] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:12:27.008 [2024-12-06 04:03:20.341528] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:27.008 [2024-12-06 04:03:20.344460] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:27.008 [2024-12-06 04:03:20.344557] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:12:27.009 BaseBdev4 
00:12:27.009 04:03:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:27.009 04:03:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s 00:12:27.009 04:03:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:27.009 04:03:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:27.009 [2024-12-06 04:03:20.353469] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:27.009 [2024-12-06 04:03:20.355936] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:27.009 [2024-12-06 04:03:20.356101] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:27.009 [2024-12-06 04:03:20.356187] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:12:27.009 [2024-12-06 04:03:20.356485] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008580 00:12:27.009 [2024-12-06 04:03:20.356510] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:12:27.009 [2024-12-06 04:03:20.356847] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000068a0 00:12:27.009 [2024-12-06 04:03:20.357071] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008580 00:12:27.009 [2024-12-06 04:03:20.357087] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008580 00:12:27.009 [2024-12-06 04:03:20.357362] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:27.009 04:03:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:27.009 04:03:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state 
raid_bdev1 online raid0 64 4 00:12:27.009 04:03:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:27.009 04:03:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:27.009 04:03:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:12:27.009 04:03:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:27.009 04:03:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:27.009 04:03:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:27.009 04:03:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:27.282 04:03:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:27.282 04:03:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:27.282 04:03:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:27.282 04:03:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:27.282 04:03:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:27.282 04:03:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:27.282 04:03:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:27.282 04:03:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:27.282 "name": "raid_bdev1", 00:12:27.282 "uuid": "f692d2e5-1a67-4914-a487-54218a4d9ee1", 00:12:27.282 "strip_size_kb": 64, 00:12:27.282 "state": "online", 00:12:27.282 "raid_level": "raid0", 00:12:27.282 "superblock": true, 00:12:27.282 "num_base_bdevs": 4, 00:12:27.282 "num_base_bdevs_discovered": 4, 00:12:27.282 
"num_base_bdevs_operational": 4, 00:12:27.282 "base_bdevs_list": [ 00:12:27.282 { 00:12:27.282 "name": "BaseBdev1", 00:12:27.282 "uuid": "70e55969-53a2-5f43-9f13-b84cdf4a9fd8", 00:12:27.282 "is_configured": true, 00:12:27.282 "data_offset": 2048, 00:12:27.282 "data_size": 63488 00:12:27.282 }, 00:12:27.282 { 00:12:27.282 "name": "BaseBdev2", 00:12:27.282 "uuid": "292d24a6-f419-5a5e-be7b-12c98ec2a672", 00:12:27.282 "is_configured": true, 00:12:27.282 "data_offset": 2048, 00:12:27.282 "data_size": 63488 00:12:27.282 }, 00:12:27.282 { 00:12:27.282 "name": "BaseBdev3", 00:12:27.282 "uuid": "efc76816-0395-5315-98b7-e065069ac46c", 00:12:27.282 "is_configured": true, 00:12:27.282 "data_offset": 2048, 00:12:27.282 "data_size": 63488 00:12:27.282 }, 00:12:27.282 { 00:12:27.282 "name": "BaseBdev4", 00:12:27.282 "uuid": "d01514e9-6d3d-5770-8306-6df4a15666a0", 00:12:27.282 "is_configured": true, 00:12:27.282 "data_offset": 2048, 00:12:27.282 "data_size": 63488 00:12:27.282 } 00:12:27.282 ] 00:12:27.282 }' 00:12:27.282 04:03:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:27.282 04:03:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:27.563 04:03:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:12:27.563 04:03:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:12:27.822 [2024-12-06 04:03:20.934093] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006a40 00:12:28.759 04:03:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:12:28.759 04:03:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:28.759 04:03:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:28.759 04:03:21 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:28.759 04:03:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:12:28.759 04:03:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]] 00:12:28.759 04:03:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=4 00:12:28.759 04:03:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:12:28.759 04:03:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:28.759 04:03:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:28.759 04:03:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:12:28.759 04:03:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:28.759 04:03:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:28.759 04:03:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:28.759 04:03:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:28.759 04:03:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:28.759 04:03:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:28.759 04:03:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:28.759 04:03:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:28.759 04:03:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:28.759 04:03:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:28.759 04:03:21 bdev_raid.raid_write_error_test 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:28.759 04:03:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:28.759 "name": "raid_bdev1", 00:12:28.759 "uuid": "f692d2e5-1a67-4914-a487-54218a4d9ee1", 00:12:28.759 "strip_size_kb": 64, 00:12:28.759 "state": "online", 00:12:28.759 "raid_level": "raid0", 00:12:28.759 "superblock": true, 00:12:28.759 "num_base_bdevs": 4, 00:12:28.759 "num_base_bdevs_discovered": 4, 00:12:28.759 "num_base_bdevs_operational": 4, 00:12:28.759 "base_bdevs_list": [ 00:12:28.759 { 00:12:28.759 "name": "BaseBdev1", 00:12:28.759 "uuid": "70e55969-53a2-5f43-9f13-b84cdf4a9fd8", 00:12:28.759 "is_configured": true, 00:12:28.759 "data_offset": 2048, 00:12:28.759 "data_size": 63488 00:12:28.759 }, 00:12:28.759 { 00:12:28.759 "name": "BaseBdev2", 00:12:28.759 "uuid": "292d24a6-f419-5a5e-be7b-12c98ec2a672", 00:12:28.759 "is_configured": true, 00:12:28.759 "data_offset": 2048, 00:12:28.759 "data_size": 63488 00:12:28.759 }, 00:12:28.759 { 00:12:28.759 "name": "BaseBdev3", 00:12:28.759 "uuid": "efc76816-0395-5315-98b7-e065069ac46c", 00:12:28.759 "is_configured": true, 00:12:28.759 "data_offset": 2048, 00:12:28.759 "data_size": 63488 00:12:28.759 }, 00:12:28.759 { 00:12:28.759 "name": "BaseBdev4", 00:12:28.759 "uuid": "d01514e9-6d3d-5770-8306-6df4a15666a0", 00:12:28.759 "is_configured": true, 00:12:28.759 "data_offset": 2048, 00:12:28.759 "data_size": 63488 00:12:28.759 } 00:12:28.759 ] 00:12:28.759 }' 00:12:28.759 04:03:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:28.759 04:03:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:29.018 04:03:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:12:29.018 04:03:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:29.018 04:03:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # 
set +x 00:12:29.018 [2024-12-06 04:03:22.264759] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:12:29.018 [2024-12-06 04:03:22.264911] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:29.018 [2024-12-06 04:03:22.268492] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:29.018 [2024-12-06 04:03:22.268608] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:29.019 [2024-12-06 04:03:22.268698] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:29.019 [2024-12-06 04:03:22.268756] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state offline 00:12:29.019 04:03:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:29.019 { 00:12:29.019 "results": [ 00:12:29.019 { 00:12:29.019 "job": "raid_bdev1", 00:12:29.019 "core_mask": "0x1", 00:12:29.019 "workload": "randrw", 00:12:29.019 "percentage": 50, 00:12:29.019 "status": "finished", 00:12:29.019 "queue_depth": 1, 00:12:29.019 "io_size": 131072, 00:12:29.019 "runtime": 1.330929, 00:12:29.019 "iops": 11095.257523128581, 00:12:29.019 "mibps": 1386.9071903910726, 00:12:29.019 "io_failed": 1, 00:12:29.019 "io_timeout": 0, 00:12:29.019 "avg_latency_us": 126.11897221420563, 00:12:29.019 "min_latency_us": 34.20786026200874, 00:12:29.019 "max_latency_us": 1788.646288209607 00:12:29.019 } 00:12:29.019 ], 00:12:29.019 "core_count": 1 00:12:29.019 } 00:12:29.019 04:03:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 71228 00:12:29.019 04:03:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # '[' -z 71228 ']' 00:12:29.019 04:03:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # kill -0 71228 00:12:29.019 04:03:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # uname 
00:12:29.019 04:03:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:29.019 04:03:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71228 00:12:29.019 04:03:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:29.019 04:03:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:29.019 04:03:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71228' 00:12:29.019 killing process with pid 71228 00:12:29.019 04:03:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@973 -- # kill 71228 00:12:29.019 [2024-12-06 04:03:22.304606] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:12:29.019 04:03:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@978 -- # wait 71228 00:12:29.588 [2024-12-06 04:03:22.740257] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:12:30.963 04:03:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.EADIr0B5pU 00:12:30.963 04:03:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:12:30.963 04:03:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:12:30.963 04:03:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.75 00:12:30.963 04:03:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid0 00:12:30.963 04:03:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:12:30.963 04:03:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:12:30.963 04:03:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.75 != \0\.\0\0 ]] 00:12:30.963 ************************************ 00:12:30.963 END TEST raid_write_error_test 00:12:30.963 
************************************ 00:12:30.963 00:12:30.963 real 0m5.288s 00:12:30.963 user 0m6.049s 00:12:30.963 sys 0m0.744s 00:12:30.963 04:03:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:30.963 04:03:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:30.963 04:03:24 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:12:30.963 04:03:24 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test concat 4 false 00:12:30.963 04:03:24 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:12:30.963 04:03:24 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:30.963 04:03:24 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:12:31.223 ************************************ 00:12:31.223 START TEST raid_state_function_test 00:12:31.223 ************************************ 00:12:31.223 04:03:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test concat 4 false 00:12:31.223 04:03:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=concat 00:12:31.223 04:03:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:12:31.223 04:03:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:12:31.223 04:03:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:12:31.223 04:03:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:12:31.223 04:03:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:12:31.223 04:03:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:12:31.223 04:03:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:12:31.223 04:03:24 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:12:31.223 04:03:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:12:31.223 04:03:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:12:31.223 04:03:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:12:31.223 04:03:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:12:31.223 04:03:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:12:31.223 04:03:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:12:31.223 04:03:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:12:31.223 04:03:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:12:31.223 04:03:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:12:31.223 04:03:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:12:31.223 04:03:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:12:31.223 04:03:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:12:31.223 04:03:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:12:31.223 04:03:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:12:31.223 04:03:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:12:31.223 04:03:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']' 00:12:31.223 04:03:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:12:31.223 04:03:24 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:12:31.223 04:03:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:12:31.223 04:03:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:12:31.223 04:03:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=71377 00:12:31.223 Process raid pid: 71377 00:12:31.223 04:03:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:12:31.223 04:03:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 71377' 00:12:31.223 04:03:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 71377 00:12:31.223 04:03:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 71377 ']' 00:12:31.223 04:03:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:31.223 04:03:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:31.223 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:31.223 04:03:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:31.223 04:03:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:31.223 04:03:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:31.223 [2024-12-06 04:03:24.434969] Starting SPDK v25.01-pre git sha1 a4d2a837b / DPDK 24.03.0 initialization... 
00:12:31.223 [2024-12-06 04:03:24.435165] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:31.482 [2024-12-06 04:03:24.625138] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:31.482 [2024-12-06 04:03:24.792828] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:31.740 [2024-12-06 04:03:25.076603] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:31.740 [2024-12-06 04:03:25.076767] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:31.998 04:03:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:31.998 04:03:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:12:31.998 04:03:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:12:31.998 04:03:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:31.998 04:03:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:31.998 [2024-12-06 04:03:25.349212] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:12:31.998 [2024-12-06 04:03:25.349294] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:12:31.998 [2024-12-06 04:03:25.349309] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:12:31.998 [2024-12-06 04:03:25.349322] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:12:31.998 [2024-12-06 04:03:25.349330] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 
BaseBdev3 00:12:31.998 [2024-12-06 04:03:25.349342] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:12:31.998 [2024-12-06 04:03:25.349349] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:12:31.998 [2024-12-06 04:03:25.349361] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:12:32.256 04:03:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:32.256 04:03:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:12:32.256 04:03:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:32.256 04:03:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:32.256 04:03:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:32.256 04:03:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:32.256 04:03:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:32.256 04:03:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:32.256 04:03:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:32.256 04:03:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:32.256 04:03:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:32.256 04:03:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:32.256 04:03:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:32.256 04:03:25 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:12:32.256 04:03:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:32.256 04:03:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:32.256 04:03:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:32.256 "name": "Existed_Raid", 00:12:32.256 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:32.256 "strip_size_kb": 64, 00:12:32.256 "state": "configuring", 00:12:32.256 "raid_level": "concat", 00:12:32.256 "superblock": false, 00:12:32.256 "num_base_bdevs": 4, 00:12:32.256 "num_base_bdevs_discovered": 0, 00:12:32.256 "num_base_bdevs_operational": 4, 00:12:32.256 "base_bdevs_list": [ 00:12:32.256 { 00:12:32.256 "name": "BaseBdev1", 00:12:32.256 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:32.256 "is_configured": false, 00:12:32.256 "data_offset": 0, 00:12:32.256 "data_size": 0 00:12:32.256 }, 00:12:32.256 { 00:12:32.256 "name": "BaseBdev2", 00:12:32.256 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:32.256 "is_configured": false, 00:12:32.256 "data_offset": 0, 00:12:32.256 "data_size": 0 00:12:32.256 }, 00:12:32.256 { 00:12:32.256 "name": "BaseBdev3", 00:12:32.256 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:32.256 "is_configured": false, 00:12:32.256 "data_offset": 0, 00:12:32.256 "data_size": 0 00:12:32.256 }, 00:12:32.256 { 00:12:32.256 "name": "BaseBdev4", 00:12:32.256 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:32.256 "is_configured": false, 00:12:32.256 "data_offset": 0, 00:12:32.256 "data_size": 0 00:12:32.256 } 00:12:32.256 ] 00:12:32.256 }' 00:12:32.256 04:03:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:32.256 04:03:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:32.514 04:03:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete 
Existed_Raid 00:12:32.514 04:03:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:32.514 04:03:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:32.514 [2024-12-06 04:03:25.800396] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:12:32.514 [2024-12-06 04:03:25.800465] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:12:32.514 04:03:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:32.514 04:03:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:12:32.514 04:03:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:32.514 04:03:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:32.514 [2024-12-06 04:03:25.812414] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:12:32.514 [2024-12-06 04:03:25.812538] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:12:32.514 [2024-12-06 04:03:25.812576] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:12:32.514 [2024-12-06 04:03:25.812608] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:12:32.514 [2024-12-06 04:03:25.812646] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:12:32.514 [2024-12-06 04:03:25.812674] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:12:32.514 [2024-12-06 04:03:25.812714] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:12:32.514 [2024-12-06 04:03:25.812743] bdev_raid_rpc.c: 
311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:12:32.514 04:03:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:32.514 04:03:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:12:32.514 04:03:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:32.514 04:03:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:32.771 [2024-12-06 04:03:25.874510] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:32.771 BaseBdev1 00:12:32.771 04:03:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:32.771 04:03:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:12:32.771 04:03:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:12:32.771 04:03:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:32.771 04:03:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:12:32.771 04:03:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:32.771 04:03:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:32.771 04:03:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:12:32.771 04:03:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:32.772 04:03:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:32.772 04:03:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:32.772 04:03:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd 
bdev_get_bdevs -b BaseBdev1 -t 2000 00:12:32.772 04:03:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:32.772 04:03:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:32.772 [ 00:12:32.772 { 00:12:32.772 "name": "BaseBdev1", 00:12:32.772 "aliases": [ 00:12:32.772 "4a778f0b-149a-4b45-af0c-900fc742a369" 00:12:32.772 ], 00:12:32.772 "product_name": "Malloc disk", 00:12:32.772 "block_size": 512, 00:12:32.772 "num_blocks": 65536, 00:12:32.772 "uuid": "4a778f0b-149a-4b45-af0c-900fc742a369", 00:12:32.772 "assigned_rate_limits": { 00:12:32.772 "rw_ios_per_sec": 0, 00:12:32.772 "rw_mbytes_per_sec": 0, 00:12:32.772 "r_mbytes_per_sec": 0, 00:12:32.772 "w_mbytes_per_sec": 0 00:12:32.772 }, 00:12:32.772 "claimed": true, 00:12:32.772 "claim_type": "exclusive_write", 00:12:32.772 "zoned": false, 00:12:32.772 "supported_io_types": { 00:12:32.772 "read": true, 00:12:32.772 "write": true, 00:12:32.772 "unmap": true, 00:12:32.772 "flush": true, 00:12:32.772 "reset": true, 00:12:32.772 "nvme_admin": false, 00:12:32.772 "nvme_io": false, 00:12:32.772 "nvme_io_md": false, 00:12:32.772 "write_zeroes": true, 00:12:32.772 "zcopy": true, 00:12:32.772 "get_zone_info": false, 00:12:32.772 "zone_management": false, 00:12:32.772 "zone_append": false, 00:12:32.772 "compare": false, 00:12:32.772 "compare_and_write": false, 00:12:32.772 "abort": true, 00:12:32.772 "seek_hole": false, 00:12:32.772 "seek_data": false, 00:12:32.772 "copy": true, 00:12:32.772 "nvme_iov_md": false 00:12:32.772 }, 00:12:32.772 "memory_domains": [ 00:12:32.772 { 00:12:32.772 "dma_device_id": "system", 00:12:32.772 "dma_device_type": 1 00:12:32.772 }, 00:12:32.772 { 00:12:32.772 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:32.772 "dma_device_type": 2 00:12:32.772 } 00:12:32.772 ], 00:12:32.772 "driver_specific": {} 00:12:32.772 } 00:12:32.772 ] 00:12:32.772 04:03:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:12:32.772 04:03:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:12:32.772 04:03:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:12:32.772 04:03:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:32.772 04:03:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:32.772 04:03:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:32.772 04:03:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:32.772 04:03:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:32.772 04:03:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:32.772 04:03:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:32.772 04:03:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:32.772 04:03:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:32.772 04:03:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:32.772 04:03:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:32.772 04:03:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:32.772 04:03:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:32.772 04:03:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:32.772 04:03:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:32.772 "name": "Existed_Raid", 
00:12:32.772 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:32.772 "strip_size_kb": 64, 00:12:32.772 "state": "configuring", 00:12:32.772 "raid_level": "concat", 00:12:32.772 "superblock": false, 00:12:32.772 "num_base_bdevs": 4, 00:12:32.772 "num_base_bdevs_discovered": 1, 00:12:32.772 "num_base_bdevs_operational": 4, 00:12:32.772 "base_bdevs_list": [ 00:12:32.772 { 00:12:32.772 "name": "BaseBdev1", 00:12:32.772 "uuid": "4a778f0b-149a-4b45-af0c-900fc742a369", 00:12:32.772 "is_configured": true, 00:12:32.772 "data_offset": 0, 00:12:32.772 "data_size": 65536 00:12:32.772 }, 00:12:32.772 { 00:12:32.772 "name": "BaseBdev2", 00:12:32.772 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:32.772 "is_configured": false, 00:12:32.772 "data_offset": 0, 00:12:32.772 "data_size": 0 00:12:32.772 }, 00:12:32.772 { 00:12:32.772 "name": "BaseBdev3", 00:12:32.772 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:32.772 "is_configured": false, 00:12:32.772 "data_offset": 0, 00:12:32.772 "data_size": 0 00:12:32.772 }, 00:12:32.772 { 00:12:32.772 "name": "BaseBdev4", 00:12:32.772 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:32.772 "is_configured": false, 00:12:32.772 "data_offset": 0, 00:12:32.772 "data_size": 0 00:12:32.772 } 00:12:32.772 ] 00:12:32.772 }' 00:12:32.772 04:03:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:32.772 04:03:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:33.031 04:03:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:12:33.031 04:03:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:33.031 04:03:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:33.289 [2024-12-06 04:03:26.385812] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:12:33.289 [2024-12-06 04:03:26.385985] bdev_raid.c: 
380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:12:33.289 04:03:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:33.289 04:03:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:12:33.289 04:03:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:33.289 04:03:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:33.289 [2024-12-06 04:03:26.393873] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:33.289 [2024-12-06 04:03:26.396585] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:12:33.289 [2024-12-06 04:03:26.396693] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:12:33.290 [2024-12-06 04:03:26.396740] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:12:33.290 [2024-12-06 04:03:26.396778] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:12:33.290 [2024-12-06 04:03:26.396815] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:12:33.290 [2024-12-06 04:03:26.396847] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:12:33.290 04:03:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:33.290 04:03:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:12:33.290 04:03:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:12:33.290 04:03:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 
00:12:33.290 04:03:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:33.290 04:03:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:33.290 04:03:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:33.290 04:03:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:33.290 04:03:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:33.290 04:03:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:33.290 04:03:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:33.290 04:03:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:33.290 04:03:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:33.290 04:03:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:33.290 04:03:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:33.290 04:03:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:33.290 04:03:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:33.290 04:03:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:33.290 04:03:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:33.290 "name": "Existed_Raid", 00:12:33.290 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:33.290 "strip_size_kb": 64, 00:12:33.290 "state": "configuring", 00:12:33.290 "raid_level": "concat", 00:12:33.290 "superblock": false, 00:12:33.290 "num_base_bdevs": 4, 00:12:33.290 
"num_base_bdevs_discovered": 1, 00:12:33.290 "num_base_bdevs_operational": 4, 00:12:33.290 "base_bdevs_list": [ 00:12:33.290 { 00:12:33.290 "name": "BaseBdev1", 00:12:33.290 "uuid": "4a778f0b-149a-4b45-af0c-900fc742a369", 00:12:33.290 "is_configured": true, 00:12:33.290 "data_offset": 0, 00:12:33.290 "data_size": 65536 00:12:33.290 }, 00:12:33.290 { 00:12:33.290 "name": "BaseBdev2", 00:12:33.290 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:33.290 "is_configured": false, 00:12:33.290 "data_offset": 0, 00:12:33.290 "data_size": 0 00:12:33.290 }, 00:12:33.290 { 00:12:33.290 "name": "BaseBdev3", 00:12:33.290 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:33.290 "is_configured": false, 00:12:33.290 "data_offset": 0, 00:12:33.290 "data_size": 0 00:12:33.290 }, 00:12:33.290 { 00:12:33.290 "name": "BaseBdev4", 00:12:33.290 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:33.290 "is_configured": false, 00:12:33.290 "data_offset": 0, 00:12:33.290 "data_size": 0 00:12:33.290 } 00:12:33.290 ] 00:12:33.290 }' 00:12:33.290 04:03:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:33.290 04:03:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:33.553 04:03:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:12:33.553 04:03:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:33.553 04:03:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:33.553 [2024-12-06 04:03:26.904637] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:33.553 BaseBdev2 00:12:33.553 04:03:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:33.553 04:03:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:12:33.553 04:03:26 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:12:33.553 04:03:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:33.553 04:03:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:12:33.553 04:03:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:33.553 04:03:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:33.553 04:03:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:12:33.553 04:03:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:33.553 04:03:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:33.817 04:03:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:33.817 04:03:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:12:33.817 04:03:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:33.817 04:03:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:33.817 [ 00:12:33.817 { 00:12:33.817 "name": "BaseBdev2", 00:12:33.817 "aliases": [ 00:12:33.817 "8f719bcd-614d-4778-903e-5ffbedf6d2f0" 00:12:33.817 ], 00:12:33.817 "product_name": "Malloc disk", 00:12:33.817 "block_size": 512, 00:12:33.817 "num_blocks": 65536, 00:12:33.817 "uuid": "8f719bcd-614d-4778-903e-5ffbedf6d2f0", 00:12:33.817 "assigned_rate_limits": { 00:12:33.817 "rw_ios_per_sec": 0, 00:12:33.817 "rw_mbytes_per_sec": 0, 00:12:33.817 "r_mbytes_per_sec": 0, 00:12:33.817 "w_mbytes_per_sec": 0 00:12:33.817 }, 00:12:33.817 "claimed": true, 00:12:33.817 "claim_type": "exclusive_write", 00:12:33.817 "zoned": false, 00:12:33.817 "supported_io_types": { 
00:12:33.817 "read": true, 00:12:33.817 "write": true, 00:12:33.817 "unmap": true, 00:12:33.817 "flush": true, 00:12:33.817 "reset": true, 00:12:33.817 "nvme_admin": false, 00:12:33.817 "nvme_io": false, 00:12:33.817 "nvme_io_md": false, 00:12:33.817 "write_zeroes": true, 00:12:33.817 "zcopy": true, 00:12:33.817 "get_zone_info": false, 00:12:33.817 "zone_management": false, 00:12:33.817 "zone_append": false, 00:12:33.817 "compare": false, 00:12:33.817 "compare_and_write": false, 00:12:33.817 "abort": true, 00:12:33.817 "seek_hole": false, 00:12:33.817 "seek_data": false, 00:12:33.817 "copy": true, 00:12:33.817 "nvme_iov_md": false 00:12:33.817 }, 00:12:33.817 "memory_domains": [ 00:12:33.817 { 00:12:33.817 "dma_device_id": "system", 00:12:33.817 "dma_device_type": 1 00:12:33.817 }, 00:12:33.817 { 00:12:33.817 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:33.817 "dma_device_type": 2 00:12:33.817 } 00:12:33.817 ], 00:12:33.817 "driver_specific": {} 00:12:33.817 } 00:12:33.817 ] 00:12:33.817 04:03:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:33.817 04:03:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:12:33.817 04:03:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:12:33.817 04:03:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:12:33.817 04:03:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:12:33.817 04:03:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:33.817 04:03:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:33.817 04:03:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:33.817 04:03:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # 
local strip_size=64 00:12:33.817 04:03:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:33.817 04:03:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:33.817 04:03:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:33.817 04:03:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:33.817 04:03:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:33.817 04:03:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:33.817 04:03:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:33.817 04:03:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:33.817 04:03:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:33.817 04:03:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:33.817 04:03:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:33.817 "name": "Existed_Raid", 00:12:33.817 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:33.817 "strip_size_kb": 64, 00:12:33.817 "state": "configuring", 00:12:33.817 "raid_level": "concat", 00:12:33.817 "superblock": false, 00:12:33.817 "num_base_bdevs": 4, 00:12:33.817 "num_base_bdevs_discovered": 2, 00:12:33.817 "num_base_bdevs_operational": 4, 00:12:33.817 "base_bdevs_list": [ 00:12:33.817 { 00:12:33.817 "name": "BaseBdev1", 00:12:33.817 "uuid": "4a778f0b-149a-4b45-af0c-900fc742a369", 00:12:33.817 "is_configured": true, 00:12:33.817 "data_offset": 0, 00:12:33.817 "data_size": 65536 00:12:33.817 }, 00:12:33.817 { 00:12:33.817 "name": "BaseBdev2", 00:12:33.817 "uuid": "8f719bcd-614d-4778-903e-5ffbedf6d2f0", 00:12:33.817 
"is_configured": true, 00:12:33.817 "data_offset": 0, 00:12:33.817 "data_size": 65536 00:12:33.817 }, 00:12:33.817 { 00:12:33.817 "name": "BaseBdev3", 00:12:33.817 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:33.817 "is_configured": false, 00:12:33.817 "data_offset": 0, 00:12:33.817 "data_size": 0 00:12:33.818 }, 00:12:33.818 { 00:12:33.818 "name": "BaseBdev4", 00:12:33.818 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:33.818 "is_configured": false, 00:12:33.818 "data_offset": 0, 00:12:33.818 "data_size": 0 00:12:33.818 } 00:12:33.818 ] 00:12:33.818 }' 00:12:33.818 04:03:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:33.818 04:03:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:34.085 04:03:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:12:34.085 04:03:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:34.085 04:03:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:34.085 BaseBdev3 00:12:34.085 [2024-12-06 04:03:27.425418] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:34.085 04:03:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:34.085 04:03:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:12:34.085 04:03:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:12:34.085 04:03:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:34.085 04:03:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:12:34.085 04:03:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:34.085 04:03:27 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:34.085 04:03:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:12:34.085 04:03:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:34.085 04:03:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:34.085 04:03:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:34.085 04:03:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:12:34.085 04:03:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:34.085 04:03:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:34.342 [ 00:12:34.342 { 00:12:34.342 "name": "BaseBdev3", 00:12:34.342 "aliases": [ 00:12:34.342 "0e5892e7-b258-42ac-9381-d3f0fde1d343" 00:12:34.342 ], 00:12:34.342 "product_name": "Malloc disk", 00:12:34.342 "block_size": 512, 00:12:34.342 "num_blocks": 65536, 00:12:34.342 "uuid": "0e5892e7-b258-42ac-9381-d3f0fde1d343", 00:12:34.342 "assigned_rate_limits": { 00:12:34.342 "rw_ios_per_sec": 0, 00:12:34.343 "rw_mbytes_per_sec": 0, 00:12:34.343 "r_mbytes_per_sec": 0, 00:12:34.343 "w_mbytes_per_sec": 0 00:12:34.343 }, 00:12:34.343 "claimed": true, 00:12:34.343 "claim_type": "exclusive_write", 00:12:34.343 "zoned": false, 00:12:34.343 "supported_io_types": { 00:12:34.343 "read": true, 00:12:34.343 "write": true, 00:12:34.343 "unmap": true, 00:12:34.343 "flush": true, 00:12:34.343 "reset": true, 00:12:34.343 "nvme_admin": false, 00:12:34.343 "nvme_io": false, 00:12:34.343 "nvme_io_md": false, 00:12:34.343 "write_zeroes": true, 00:12:34.343 "zcopy": true, 00:12:34.343 "get_zone_info": false, 00:12:34.343 "zone_management": false, 00:12:34.343 "zone_append": false, 00:12:34.343 "compare": false, 00:12:34.343 "compare_and_write": false, 
00:12:34.343 "abort": true, 00:12:34.343 "seek_hole": false, 00:12:34.343 "seek_data": false, 00:12:34.343 "copy": true, 00:12:34.343 "nvme_iov_md": false 00:12:34.343 }, 00:12:34.343 "memory_domains": [ 00:12:34.343 { 00:12:34.343 "dma_device_id": "system", 00:12:34.343 "dma_device_type": 1 00:12:34.343 }, 00:12:34.343 { 00:12:34.343 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:34.343 "dma_device_type": 2 00:12:34.343 } 00:12:34.343 ], 00:12:34.343 "driver_specific": {} 00:12:34.343 } 00:12:34.343 ] 00:12:34.343 04:03:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:34.343 04:03:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:12:34.343 04:03:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:12:34.343 04:03:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:12:34.343 04:03:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:12:34.343 04:03:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:34.343 04:03:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:34.343 04:03:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:34.343 04:03:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:34.343 04:03:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:34.343 04:03:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:34.343 04:03:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:34.343 04:03:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
00:12:34.343 04:03:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:34.343 04:03:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:34.343 04:03:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:34.343 04:03:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:34.343 04:03:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:34.343 04:03:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:34.343 04:03:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:34.343 "name": "Existed_Raid", 00:12:34.343 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:34.343 "strip_size_kb": 64, 00:12:34.343 "state": "configuring", 00:12:34.343 "raid_level": "concat", 00:12:34.343 "superblock": false, 00:12:34.343 "num_base_bdevs": 4, 00:12:34.343 "num_base_bdevs_discovered": 3, 00:12:34.343 "num_base_bdevs_operational": 4, 00:12:34.343 "base_bdevs_list": [ 00:12:34.343 { 00:12:34.343 "name": "BaseBdev1", 00:12:34.343 "uuid": "4a778f0b-149a-4b45-af0c-900fc742a369", 00:12:34.343 "is_configured": true, 00:12:34.343 "data_offset": 0, 00:12:34.343 "data_size": 65536 00:12:34.343 }, 00:12:34.343 { 00:12:34.343 "name": "BaseBdev2", 00:12:34.343 "uuid": "8f719bcd-614d-4778-903e-5ffbedf6d2f0", 00:12:34.343 "is_configured": true, 00:12:34.343 "data_offset": 0, 00:12:34.343 "data_size": 65536 00:12:34.343 }, 00:12:34.343 { 00:12:34.343 "name": "BaseBdev3", 00:12:34.343 "uuid": "0e5892e7-b258-42ac-9381-d3f0fde1d343", 00:12:34.343 "is_configured": true, 00:12:34.343 "data_offset": 0, 00:12:34.343 "data_size": 65536 00:12:34.343 }, 00:12:34.343 { 00:12:34.343 "name": "BaseBdev4", 00:12:34.343 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:34.343 "is_configured": false, 
00:12:34.343 "data_offset": 0, 00:12:34.343 "data_size": 0 00:12:34.343 } 00:12:34.343 ] 00:12:34.343 }' 00:12:34.343 04:03:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:34.343 04:03:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:34.600 04:03:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:12:34.600 04:03:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:34.600 04:03:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:34.857 [2024-12-06 04:03:27.967160] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:12:34.857 [2024-12-06 04:03:27.967296] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:12:34.857 [2024-12-06 04:03:27.967341] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 262144, blocklen 512 00:12:34.857 [2024-12-06 04:03:27.967704] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:12:34.857 [2024-12-06 04:03:27.967942] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:12:34.857 [2024-12-06 04:03:27.967990] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:12:34.857 [2024-12-06 04:03:27.968328] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:34.857 BaseBdev4 00:12:34.857 04:03:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:34.857 04:03:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:12:34.857 04:03:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:12:34.857 04:03:27 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:34.857 04:03:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:12:34.857 04:03:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:34.857 04:03:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:34.857 04:03:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:12:34.857 04:03:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:34.857 04:03:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:34.857 04:03:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:34.857 04:03:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:12:34.857 04:03:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:34.857 04:03:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:34.857 [ 00:12:34.857 { 00:12:34.857 "name": "BaseBdev4", 00:12:34.857 "aliases": [ 00:12:34.857 "8aa5377a-c171-44cf-8552-81445e310a6d" 00:12:34.857 ], 00:12:34.857 "product_name": "Malloc disk", 00:12:34.857 "block_size": 512, 00:12:34.857 "num_blocks": 65536, 00:12:34.857 "uuid": "8aa5377a-c171-44cf-8552-81445e310a6d", 00:12:34.857 "assigned_rate_limits": { 00:12:34.857 "rw_ios_per_sec": 0, 00:12:34.857 "rw_mbytes_per_sec": 0, 00:12:34.857 "r_mbytes_per_sec": 0, 00:12:34.857 "w_mbytes_per_sec": 0 00:12:34.857 }, 00:12:34.857 "claimed": true, 00:12:34.857 "claim_type": "exclusive_write", 00:12:34.857 "zoned": false, 00:12:34.857 "supported_io_types": { 00:12:34.857 "read": true, 00:12:34.857 "write": true, 00:12:34.857 "unmap": true, 00:12:34.857 "flush": true, 00:12:34.857 "reset": true, 00:12:34.857 
"nvme_admin": false, 00:12:34.857 "nvme_io": false, 00:12:34.857 "nvme_io_md": false, 00:12:34.857 "write_zeroes": true, 00:12:34.857 "zcopy": true, 00:12:34.857 "get_zone_info": false, 00:12:34.857 "zone_management": false, 00:12:34.857 "zone_append": false, 00:12:34.857 "compare": false, 00:12:34.857 "compare_and_write": false, 00:12:34.857 "abort": true, 00:12:34.857 "seek_hole": false, 00:12:34.857 "seek_data": false, 00:12:34.857 "copy": true, 00:12:34.857 "nvme_iov_md": false 00:12:34.857 }, 00:12:34.857 "memory_domains": [ 00:12:34.857 { 00:12:34.857 "dma_device_id": "system", 00:12:34.857 "dma_device_type": 1 00:12:34.857 }, 00:12:34.857 { 00:12:34.857 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:34.857 "dma_device_type": 2 00:12:34.857 } 00:12:34.857 ], 00:12:34.857 "driver_specific": {} 00:12:34.857 } 00:12:34.857 ] 00:12:34.857 04:03:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:34.857 04:03:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:12:34.857 04:03:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:12:34.857 04:03:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:12:34.857 04:03:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 64 4 00:12:34.857 04:03:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:34.857 04:03:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:34.857 04:03:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:34.857 04:03:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:34.857 04:03:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:34.857 
04:03:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:34.857 04:03:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:34.857 04:03:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:34.857 04:03:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:34.857 04:03:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:34.857 04:03:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:34.857 04:03:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:34.857 04:03:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:34.857 04:03:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:34.857 04:03:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:34.857 "name": "Existed_Raid", 00:12:34.857 "uuid": "61e92274-1a50-493a-9103-c7414dc28b71", 00:12:34.857 "strip_size_kb": 64, 00:12:34.857 "state": "online", 00:12:34.857 "raid_level": "concat", 00:12:34.857 "superblock": false, 00:12:34.857 "num_base_bdevs": 4, 00:12:34.857 "num_base_bdevs_discovered": 4, 00:12:34.857 "num_base_bdevs_operational": 4, 00:12:34.857 "base_bdevs_list": [ 00:12:34.857 { 00:12:34.857 "name": "BaseBdev1", 00:12:34.857 "uuid": "4a778f0b-149a-4b45-af0c-900fc742a369", 00:12:34.857 "is_configured": true, 00:12:34.857 "data_offset": 0, 00:12:34.857 "data_size": 65536 00:12:34.857 }, 00:12:34.857 { 00:12:34.857 "name": "BaseBdev2", 00:12:34.857 "uuid": "8f719bcd-614d-4778-903e-5ffbedf6d2f0", 00:12:34.857 "is_configured": true, 00:12:34.857 "data_offset": 0, 00:12:34.857 "data_size": 65536 00:12:34.857 }, 00:12:34.857 { 00:12:34.857 "name": "BaseBdev3", 
00:12:34.857 "uuid": "0e5892e7-b258-42ac-9381-d3f0fde1d343", 00:12:34.857 "is_configured": true, 00:12:34.857 "data_offset": 0, 00:12:34.857 "data_size": 65536 00:12:34.857 }, 00:12:34.857 { 00:12:34.857 "name": "BaseBdev4", 00:12:34.857 "uuid": "8aa5377a-c171-44cf-8552-81445e310a6d", 00:12:34.857 "is_configured": true, 00:12:34.857 "data_offset": 0, 00:12:34.857 "data_size": 65536 00:12:34.857 } 00:12:34.857 ] 00:12:34.857 }' 00:12:34.857 04:03:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:34.857 04:03:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:35.114 04:03:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:12:35.114 04:03:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:12:35.114 04:03:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:12:35.114 04:03:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:12:35.114 04:03:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:12:35.114 04:03:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:12:35.114 04:03:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:12:35.114 04:03:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:35.114 04:03:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:35.114 04:03:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:12:35.114 [2024-12-06 04:03:28.466875] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:35.372 04:03:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:35.372 
04:03:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:12:35.372 "name": "Existed_Raid", 00:12:35.372 "aliases": [ 00:12:35.372 "61e92274-1a50-493a-9103-c7414dc28b71" 00:12:35.372 ], 00:12:35.372 "product_name": "Raid Volume", 00:12:35.372 "block_size": 512, 00:12:35.372 "num_blocks": 262144, 00:12:35.372 "uuid": "61e92274-1a50-493a-9103-c7414dc28b71", 00:12:35.372 "assigned_rate_limits": { 00:12:35.372 "rw_ios_per_sec": 0, 00:12:35.372 "rw_mbytes_per_sec": 0, 00:12:35.372 "r_mbytes_per_sec": 0, 00:12:35.372 "w_mbytes_per_sec": 0 00:12:35.372 }, 00:12:35.372 "claimed": false, 00:12:35.372 "zoned": false, 00:12:35.372 "supported_io_types": { 00:12:35.372 "read": true, 00:12:35.372 "write": true, 00:12:35.372 "unmap": true, 00:12:35.372 "flush": true, 00:12:35.372 "reset": true, 00:12:35.372 "nvme_admin": false, 00:12:35.372 "nvme_io": false, 00:12:35.372 "nvme_io_md": false, 00:12:35.372 "write_zeroes": true, 00:12:35.372 "zcopy": false, 00:12:35.372 "get_zone_info": false, 00:12:35.372 "zone_management": false, 00:12:35.372 "zone_append": false, 00:12:35.372 "compare": false, 00:12:35.372 "compare_and_write": false, 00:12:35.372 "abort": false, 00:12:35.372 "seek_hole": false, 00:12:35.372 "seek_data": false, 00:12:35.372 "copy": false, 00:12:35.372 "nvme_iov_md": false 00:12:35.372 }, 00:12:35.372 "memory_domains": [ 00:12:35.372 { 00:12:35.372 "dma_device_id": "system", 00:12:35.372 "dma_device_type": 1 00:12:35.372 }, 00:12:35.372 { 00:12:35.372 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:35.372 "dma_device_type": 2 00:12:35.372 }, 00:12:35.372 { 00:12:35.372 "dma_device_id": "system", 00:12:35.372 "dma_device_type": 1 00:12:35.372 }, 00:12:35.372 { 00:12:35.372 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:35.372 "dma_device_type": 2 00:12:35.372 }, 00:12:35.372 { 00:12:35.372 "dma_device_id": "system", 00:12:35.372 "dma_device_type": 1 00:12:35.372 }, 00:12:35.372 { 00:12:35.372 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:12:35.372 "dma_device_type": 2 00:12:35.372 }, 00:12:35.372 { 00:12:35.372 "dma_device_id": "system", 00:12:35.372 "dma_device_type": 1 00:12:35.372 }, 00:12:35.372 { 00:12:35.372 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:35.372 "dma_device_type": 2 00:12:35.372 } 00:12:35.372 ], 00:12:35.372 "driver_specific": { 00:12:35.372 "raid": { 00:12:35.372 "uuid": "61e92274-1a50-493a-9103-c7414dc28b71", 00:12:35.372 "strip_size_kb": 64, 00:12:35.372 "state": "online", 00:12:35.372 "raid_level": "concat", 00:12:35.372 "superblock": false, 00:12:35.372 "num_base_bdevs": 4, 00:12:35.372 "num_base_bdevs_discovered": 4, 00:12:35.372 "num_base_bdevs_operational": 4, 00:12:35.372 "base_bdevs_list": [ 00:12:35.372 { 00:12:35.372 "name": "BaseBdev1", 00:12:35.372 "uuid": "4a778f0b-149a-4b45-af0c-900fc742a369", 00:12:35.372 "is_configured": true, 00:12:35.372 "data_offset": 0, 00:12:35.372 "data_size": 65536 00:12:35.372 }, 00:12:35.372 { 00:12:35.372 "name": "BaseBdev2", 00:12:35.372 "uuid": "8f719bcd-614d-4778-903e-5ffbedf6d2f0", 00:12:35.372 "is_configured": true, 00:12:35.372 "data_offset": 0, 00:12:35.372 "data_size": 65536 00:12:35.372 }, 00:12:35.372 { 00:12:35.372 "name": "BaseBdev3", 00:12:35.372 "uuid": "0e5892e7-b258-42ac-9381-d3f0fde1d343", 00:12:35.372 "is_configured": true, 00:12:35.372 "data_offset": 0, 00:12:35.372 "data_size": 65536 00:12:35.372 }, 00:12:35.372 { 00:12:35.372 "name": "BaseBdev4", 00:12:35.372 "uuid": "8aa5377a-c171-44cf-8552-81445e310a6d", 00:12:35.372 "is_configured": true, 00:12:35.372 "data_offset": 0, 00:12:35.372 "data_size": 65536 00:12:35.372 } 00:12:35.372 ] 00:12:35.372 } 00:12:35.372 } 00:12:35.372 }' 00:12:35.372 04:03:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:12:35.372 04:03:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:12:35.372 BaseBdev2 
00:12:35.372 BaseBdev3 00:12:35.372 BaseBdev4' 00:12:35.372 04:03:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:35.372 04:03:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:12:35.372 04:03:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:35.372 04:03:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:12:35.372 04:03:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:35.372 04:03:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:35.372 04:03:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:35.372 04:03:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:35.372 04:03:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:35.372 04:03:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:35.372 04:03:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:35.372 04:03:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:12:35.372 04:03:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:35.372 04:03:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:35.372 04:03:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:35.372 04:03:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:35.372 04:03:28 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:35.372 04:03:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:35.372 04:03:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:35.372 04:03:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:35.372 04:03:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:12:35.372 04:03:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:35.372 04:03:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:35.372 04:03:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:35.629 04:03:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:35.629 04:03:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:35.629 04:03:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:35.629 04:03:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:35.629 04:03:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:12:35.629 04:03:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:35.629 04:03:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:35.629 04:03:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:35.629 04:03:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:35.629 04:03:28 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:35.629 04:03:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:12:35.629 04:03:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:35.629 04:03:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:35.630 [2024-12-06 04:03:28.801995] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:12:35.630 [2024-12-06 04:03:28.802084] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:35.630 [2024-12-06 04:03:28.802172] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:35.630 04:03:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:35.630 04:03:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:12:35.630 04:03:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy concat 00:12:35.630 04:03:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:12:35.630 04:03:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1 00:12:35.630 04:03:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:12:35.630 04:03:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 3 00:12:35.630 04:03:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:35.630 04:03:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:12:35.630 04:03:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:35.630 04:03:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local 
strip_size=64 00:12:35.630 04:03:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:35.630 04:03:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:35.630 04:03:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:35.630 04:03:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:35.630 04:03:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:35.630 04:03:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:35.630 04:03:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:35.630 04:03:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:35.630 04:03:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:35.630 04:03:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:35.630 04:03:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:35.630 "name": "Existed_Raid", 00:12:35.630 "uuid": "61e92274-1a50-493a-9103-c7414dc28b71", 00:12:35.630 "strip_size_kb": 64, 00:12:35.630 "state": "offline", 00:12:35.630 "raid_level": "concat", 00:12:35.630 "superblock": false, 00:12:35.630 "num_base_bdevs": 4, 00:12:35.630 "num_base_bdevs_discovered": 3, 00:12:35.630 "num_base_bdevs_operational": 3, 00:12:35.630 "base_bdevs_list": [ 00:12:35.630 { 00:12:35.630 "name": null, 00:12:35.630 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:35.630 "is_configured": false, 00:12:35.630 "data_offset": 0, 00:12:35.630 "data_size": 65536 00:12:35.630 }, 00:12:35.630 { 00:12:35.630 "name": "BaseBdev2", 00:12:35.630 "uuid": "8f719bcd-614d-4778-903e-5ffbedf6d2f0", 00:12:35.630 "is_configured": 
true, 00:12:35.630 "data_offset": 0, 00:12:35.630 "data_size": 65536 00:12:35.630 }, 00:12:35.630 { 00:12:35.630 "name": "BaseBdev3", 00:12:35.630 "uuid": "0e5892e7-b258-42ac-9381-d3f0fde1d343", 00:12:35.630 "is_configured": true, 00:12:35.630 "data_offset": 0, 00:12:35.630 "data_size": 65536 00:12:35.630 }, 00:12:35.630 { 00:12:35.630 "name": "BaseBdev4", 00:12:35.630 "uuid": "8aa5377a-c171-44cf-8552-81445e310a6d", 00:12:35.630 "is_configured": true, 00:12:35.630 "data_offset": 0, 00:12:35.630 "data_size": 65536 00:12:35.630 } 00:12:35.630 ] 00:12:35.630 }' 00:12:35.630 04:03:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:35.630 04:03:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:36.195 04:03:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:12:36.195 04:03:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:12:36.195 04:03:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:36.195 04:03:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:36.195 04:03:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:12:36.195 04:03:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:36.195 04:03:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:36.195 04:03:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:12:36.195 04:03:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:12:36.195 04:03:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:12:36.195 04:03:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 
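[Editor's note: after `bdev_malloc_delete BaseBdev1` above, the test derives the expected array state via `has_redundancy concat` (bdev_raid.sh@261): concat cannot survive a missing member, so the helper returns 1 and `expected_state` becomes `offline`, matching the `"state": "offline"` record that follows. A minimal sketch of that dispatch, assuming the redundant-level arm covers raid1/raid5f — this log only shows the `return 1` path taken for concat.]

```shell
# Sketch of the has_redundancy case dispatch exercised at bdev_raid.sh@261.
# Only the concat -> return 1 branch is visible in this log; the
# raid1/raid5f arm is an assumed placeholder for the redundant levels.
has_redundancy() {
    case $1 in
        raid1 | raid5f) return 0 ;;
        *) return 1 ;;
    esac
}

# Mirrors bdev_raid.sh@261-262: no redundancy means losing BaseBdev1
# must drive Existed_Raid from online to offline, with the discovered
# and operational counts dropping from 4 to 3.
if has_redundancy concat; then
    expected_state=online
else
    expected_state=offline
fi
```

The subsequent `verify_raid_bdev_state Existed_Raid offline concat 64 3` call then checks the RPC output against exactly this `offline` expectation.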
00:12:36.195 04:03:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:36.195 [2024-12-06 04:03:29.396162] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:12:36.195 04:03:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:36.195 04:03:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:12:36.195 04:03:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:12:36.195 04:03:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:36.195 04:03:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:12:36.195 04:03:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:36.195 04:03:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:36.195 04:03:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:36.453 04:03:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:12:36.453 04:03:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:12:36.453 04:03:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:12:36.453 04:03:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:36.453 04:03:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:36.453 [2024-12-06 04:03:29.565946] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:12:36.453 04:03:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:36.453 04:03:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:12:36.453 04:03:29 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:12:36.453 04:03:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:36.453 04:03:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:12:36.453 04:03:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:36.453 04:03:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:36.453 04:03:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:36.453 04:03:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:12:36.453 04:03:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:12:36.453 04:03:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:12:36.453 04:03:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:36.453 04:03:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:36.453 [2024-12-06 04:03:29.730390] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:12:36.453 [2024-12-06 04:03:29.730446] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:12:36.710 04:03:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:36.710 04:03:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:12:36.710 04:03:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:12:36.710 04:03:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:36.710 04:03:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:12:36.710 04:03:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:36.710 04:03:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:12:36.710 04:03:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:36.710 04:03:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:12:36.710 04:03:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:12:36.710 04:03:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:12:36.710 04:03:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:12:36.710 04:03:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:12:36.710 04:03:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:12:36.710 04:03:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:36.710 04:03:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:36.710 BaseBdev2 00:12:36.710 04:03:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:36.710 04:03:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:12:36.710 04:03:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:12:36.710 04:03:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:36.711 04:03:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:12:36.711 04:03:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:36.711 04:03:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # 
bdev_timeout=2000 00:12:36.711 04:03:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:12:36.711 04:03:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:36.711 04:03:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:36.711 04:03:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:36.711 04:03:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:12:36.711 04:03:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:36.711 04:03:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:36.711 [ 00:12:36.711 { 00:12:36.711 "name": "BaseBdev2", 00:12:36.711 "aliases": [ 00:12:36.711 "13027b74-2019-4f39-9102-e6e2de565d2a" 00:12:36.711 ], 00:12:36.711 "product_name": "Malloc disk", 00:12:36.711 "block_size": 512, 00:12:36.711 "num_blocks": 65536, 00:12:36.711 "uuid": "13027b74-2019-4f39-9102-e6e2de565d2a", 00:12:36.711 "assigned_rate_limits": { 00:12:36.711 "rw_ios_per_sec": 0, 00:12:36.711 "rw_mbytes_per_sec": 0, 00:12:36.711 "r_mbytes_per_sec": 0, 00:12:36.711 "w_mbytes_per_sec": 0 00:12:36.711 }, 00:12:36.711 "claimed": false, 00:12:36.711 "zoned": false, 00:12:36.711 "supported_io_types": { 00:12:36.711 "read": true, 00:12:36.711 "write": true, 00:12:36.711 "unmap": true, 00:12:36.711 "flush": true, 00:12:36.711 "reset": true, 00:12:36.711 "nvme_admin": false, 00:12:36.711 "nvme_io": false, 00:12:36.711 "nvme_io_md": false, 00:12:36.711 "write_zeroes": true, 00:12:36.711 "zcopy": true, 00:12:36.711 "get_zone_info": false, 00:12:36.711 "zone_management": false, 00:12:36.711 "zone_append": false, 00:12:36.711 "compare": false, 00:12:36.711 "compare_and_write": false, 00:12:36.711 "abort": true, 00:12:36.711 "seek_hole": false, 00:12:36.711 
"seek_data": false, 00:12:36.711 "copy": true, 00:12:36.711 "nvme_iov_md": false 00:12:36.711 }, 00:12:36.711 "memory_domains": [ 00:12:36.711 { 00:12:36.711 "dma_device_id": "system", 00:12:36.711 "dma_device_type": 1 00:12:36.711 }, 00:12:36.711 { 00:12:36.711 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:36.711 "dma_device_type": 2 00:12:36.711 } 00:12:36.711 ], 00:12:36.711 "driver_specific": {} 00:12:36.711 } 00:12:36.711 ] 00:12:36.711 04:03:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:36.711 04:03:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:12:36.711 04:03:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:12:36.711 04:03:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:12:36.711 04:03:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:12:36.711 04:03:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:36.711 04:03:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:36.711 BaseBdev3 00:12:36.711 04:03:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:36.711 04:03:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:12:36.711 04:03:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:12:36.711 04:03:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:36.711 04:03:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:12:36.711 04:03:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:36.711 04:03:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 
00:12:36.711 04:03:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:12:36.711 04:03:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:36.711 04:03:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:36.711 04:03:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:36.711 04:03:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:12:36.711 04:03:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:36.711 04:03:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:36.969 [ 00:12:36.969 { 00:12:36.969 "name": "BaseBdev3", 00:12:36.969 "aliases": [ 00:12:36.969 "57eb1b2b-0639-4853-9b43-4d841e1fb7c0" 00:12:36.969 ], 00:12:36.969 "product_name": "Malloc disk", 00:12:36.969 "block_size": 512, 00:12:36.969 "num_blocks": 65536, 00:12:36.969 "uuid": "57eb1b2b-0639-4853-9b43-4d841e1fb7c0", 00:12:36.969 "assigned_rate_limits": { 00:12:36.969 "rw_ios_per_sec": 0, 00:12:36.969 "rw_mbytes_per_sec": 0, 00:12:36.969 "r_mbytes_per_sec": 0, 00:12:36.969 "w_mbytes_per_sec": 0 00:12:36.969 }, 00:12:36.969 "claimed": false, 00:12:36.969 "zoned": false, 00:12:36.969 "supported_io_types": { 00:12:36.969 "read": true, 00:12:36.969 "write": true, 00:12:36.969 "unmap": true, 00:12:36.969 "flush": true, 00:12:36.969 "reset": true, 00:12:36.969 "nvme_admin": false, 00:12:36.969 "nvme_io": false, 00:12:36.969 "nvme_io_md": false, 00:12:36.969 "write_zeroes": true, 00:12:36.969 "zcopy": true, 00:12:36.969 "get_zone_info": false, 00:12:36.969 "zone_management": false, 00:12:36.969 "zone_append": false, 00:12:36.969 "compare": false, 00:12:36.969 "compare_and_write": false, 00:12:36.969 "abort": true, 00:12:36.969 "seek_hole": false, 00:12:36.969 "seek_data": false, 
00:12:36.969 "copy": true, 00:12:36.969 "nvme_iov_md": false 00:12:36.969 }, 00:12:36.969 "memory_domains": [ 00:12:36.969 { 00:12:36.969 "dma_device_id": "system", 00:12:36.969 "dma_device_type": 1 00:12:36.969 }, 00:12:36.969 { 00:12:36.969 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:36.969 "dma_device_type": 2 00:12:36.969 } 00:12:36.969 ], 00:12:36.969 "driver_specific": {} 00:12:36.969 } 00:12:36.969 ] 00:12:36.969 04:03:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:36.969 04:03:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:12:36.969 04:03:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:12:36.969 04:03:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:12:36.969 04:03:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:12:36.969 04:03:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:36.969 04:03:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:36.969 BaseBdev4 00:12:36.969 04:03:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:36.969 04:03:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:12:36.969 04:03:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:12:36.969 04:03:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:36.969 04:03:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:12:36.969 04:03:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:36.969 04:03:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:36.969 
04:03:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:12:36.969 04:03:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:36.969 04:03:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:36.969 04:03:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:36.969 04:03:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:12:36.969 04:03:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:36.969 04:03:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:36.969 [ 00:12:36.969 { 00:12:36.969 "name": "BaseBdev4", 00:12:36.969 "aliases": [ 00:12:36.969 "2ad522db-eafe-4262-85af-3c37f58f0d6a" 00:12:36.969 ], 00:12:36.969 "product_name": "Malloc disk", 00:12:36.969 "block_size": 512, 00:12:36.969 "num_blocks": 65536, 00:12:36.969 "uuid": "2ad522db-eafe-4262-85af-3c37f58f0d6a", 00:12:36.969 "assigned_rate_limits": { 00:12:36.969 "rw_ios_per_sec": 0, 00:12:36.969 "rw_mbytes_per_sec": 0, 00:12:36.969 "r_mbytes_per_sec": 0, 00:12:36.969 "w_mbytes_per_sec": 0 00:12:36.969 }, 00:12:36.969 "claimed": false, 00:12:36.969 "zoned": false, 00:12:36.969 "supported_io_types": { 00:12:36.969 "read": true, 00:12:36.969 "write": true, 00:12:36.969 "unmap": true, 00:12:36.969 "flush": true, 00:12:36.969 "reset": true, 00:12:36.969 "nvme_admin": false, 00:12:36.969 "nvme_io": false, 00:12:36.969 "nvme_io_md": false, 00:12:36.969 "write_zeroes": true, 00:12:36.969 "zcopy": true, 00:12:36.969 "get_zone_info": false, 00:12:36.969 "zone_management": false, 00:12:36.969 "zone_append": false, 00:12:36.969 "compare": false, 00:12:36.969 "compare_and_write": false, 00:12:36.969 "abort": true, 00:12:36.969 "seek_hole": false, 00:12:36.969 "seek_data": false, 00:12:36.969 
"copy": true, 00:12:36.969 "nvme_iov_md": false 00:12:36.969 }, 00:12:36.969 "memory_domains": [ 00:12:36.969 { 00:12:36.969 "dma_device_id": "system", 00:12:36.969 "dma_device_type": 1 00:12:36.969 }, 00:12:36.969 { 00:12:36.969 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:36.969 "dma_device_type": 2 00:12:36.969 } 00:12:36.969 ], 00:12:36.969 "driver_specific": {} 00:12:36.969 } 00:12:36.969 ] 00:12:36.969 04:03:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:36.969 04:03:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:12:36.969 04:03:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:12:36.969 04:03:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:12:36.969 04:03:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:12:36.969 04:03:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:36.969 04:03:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:36.969 [2024-12-06 04:03:30.157773] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:12:36.969 [2024-12-06 04:03:30.157874] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:12:36.969 [2024-12-06 04:03:30.157931] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:36.969 [2024-12-06 04:03:30.160037] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:36.969 [2024-12-06 04:03:30.160157] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:12:36.969 04:03:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:36.970 04:03:30 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:12:36.970 04:03:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:36.970 04:03:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:36.970 04:03:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:36.970 04:03:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:36.970 04:03:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:36.970 04:03:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:36.970 04:03:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:36.970 04:03:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:36.970 04:03:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:36.970 04:03:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:36.970 04:03:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:36.970 04:03:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:36.970 04:03:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:36.970 04:03:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:36.970 04:03:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:36.970 "name": "Existed_Raid", 00:12:36.970 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:36.970 "strip_size_kb": 64, 00:12:36.970 "state": "configuring", 00:12:36.970 
"raid_level": "concat", 00:12:36.970 "superblock": false, 00:12:36.970 "num_base_bdevs": 4, 00:12:36.970 "num_base_bdevs_discovered": 3, 00:12:36.970 "num_base_bdevs_operational": 4, 00:12:36.970 "base_bdevs_list": [ 00:12:36.970 { 00:12:36.970 "name": "BaseBdev1", 00:12:36.970 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:36.970 "is_configured": false, 00:12:36.970 "data_offset": 0, 00:12:36.970 "data_size": 0 00:12:36.970 }, 00:12:36.970 { 00:12:36.970 "name": "BaseBdev2", 00:12:36.970 "uuid": "13027b74-2019-4f39-9102-e6e2de565d2a", 00:12:36.970 "is_configured": true, 00:12:36.970 "data_offset": 0, 00:12:36.970 "data_size": 65536 00:12:36.970 }, 00:12:36.970 { 00:12:36.970 "name": "BaseBdev3", 00:12:36.970 "uuid": "57eb1b2b-0639-4853-9b43-4d841e1fb7c0", 00:12:36.970 "is_configured": true, 00:12:36.970 "data_offset": 0, 00:12:36.970 "data_size": 65536 00:12:36.970 }, 00:12:36.970 { 00:12:36.970 "name": "BaseBdev4", 00:12:36.970 "uuid": "2ad522db-eafe-4262-85af-3c37f58f0d6a", 00:12:36.970 "is_configured": true, 00:12:36.970 "data_offset": 0, 00:12:36.970 "data_size": 65536 00:12:36.970 } 00:12:36.970 ] 00:12:36.970 }' 00:12:36.970 04:03:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:36.970 04:03:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:37.535 04:03:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:12:37.535 04:03:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:37.535 04:03:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:37.535 [2024-12-06 04:03:30.636994] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:12:37.535 04:03:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:37.535 04:03:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # 
verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:12:37.535 04:03:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:37.535 04:03:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:37.535 04:03:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:37.535 04:03:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:37.535 04:03:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:37.535 04:03:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:37.535 04:03:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:37.535 04:03:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:37.535 04:03:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:37.535 04:03:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:37.535 04:03:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:37.535 04:03:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:37.535 04:03:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:37.535 04:03:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:37.535 04:03:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:37.535 "name": "Existed_Raid", 00:12:37.535 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:37.535 "strip_size_kb": 64, 00:12:37.535 "state": "configuring", 00:12:37.535 "raid_level": "concat", 00:12:37.535 "superblock": false, 
00:12:37.535 "num_base_bdevs": 4, 00:12:37.535 "num_base_bdevs_discovered": 2, 00:12:37.535 "num_base_bdevs_operational": 4, 00:12:37.535 "base_bdevs_list": [ 00:12:37.535 { 00:12:37.535 "name": "BaseBdev1", 00:12:37.535 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:37.535 "is_configured": false, 00:12:37.535 "data_offset": 0, 00:12:37.535 "data_size": 0 00:12:37.535 }, 00:12:37.535 { 00:12:37.535 "name": null, 00:12:37.535 "uuid": "13027b74-2019-4f39-9102-e6e2de565d2a", 00:12:37.535 "is_configured": false, 00:12:37.535 "data_offset": 0, 00:12:37.535 "data_size": 65536 00:12:37.535 }, 00:12:37.535 { 00:12:37.535 "name": "BaseBdev3", 00:12:37.535 "uuid": "57eb1b2b-0639-4853-9b43-4d841e1fb7c0", 00:12:37.535 "is_configured": true, 00:12:37.535 "data_offset": 0, 00:12:37.535 "data_size": 65536 00:12:37.535 }, 00:12:37.535 { 00:12:37.535 "name": "BaseBdev4", 00:12:37.535 "uuid": "2ad522db-eafe-4262-85af-3c37f58f0d6a", 00:12:37.535 "is_configured": true, 00:12:37.535 "data_offset": 0, 00:12:37.535 "data_size": 65536 00:12:37.535 } 00:12:37.535 ] 00:12:37.535 }' 00:12:37.535 04:03:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:37.535 04:03:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:37.794 04:03:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:12:37.794 04:03:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:37.794 04:03:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:37.794 04:03:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:37.794 04:03:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:37.794 04:03:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:12:37.794 04:03:31 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:12:37.794 04:03:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:37.794 04:03:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:38.054 [2024-12-06 04:03:31.160477] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:38.054 BaseBdev1 00:12:38.054 04:03:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:38.054 04:03:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:12:38.054 04:03:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:12:38.054 04:03:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:38.054 04:03:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:12:38.054 04:03:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:38.054 04:03:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:38.054 04:03:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:12:38.054 04:03:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:38.054 04:03:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:38.054 04:03:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:38.054 04:03:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:12:38.054 04:03:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:38.054 04:03:31 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:12:38.054 [ 00:12:38.054 { 00:12:38.054 "name": "BaseBdev1", 00:12:38.054 "aliases": [ 00:12:38.054 "999f7003-3c1f-462b-a320-06aa5dc418c9" 00:12:38.054 ], 00:12:38.054 "product_name": "Malloc disk", 00:12:38.054 "block_size": 512, 00:12:38.054 "num_blocks": 65536, 00:12:38.054 "uuid": "999f7003-3c1f-462b-a320-06aa5dc418c9", 00:12:38.054 "assigned_rate_limits": { 00:12:38.054 "rw_ios_per_sec": 0, 00:12:38.054 "rw_mbytes_per_sec": 0, 00:12:38.054 "r_mbytes_per_sec": 0, 00:12:38.054 "w_mbytes_per_sec": 0 00:12:38.054 }, 00:12:38.054 "claimed": true, 00:12:38.054 "claim_type": "exclusive_write", 00:12:38.054 "zoned": false, 00:12:38.054 "supported_io_types": { 00:12:38.054 "read": true, 00:12:38.054 "write": true, 00:12:38.054 "unmap": true, 00:12:38.054 "flush": true, 00:12:38.054 "reset": true, 00:12:38.054 "nvme_admin": false, 00:12:38.054 "nvme_io": false, 00:12:38.054 "nvme_io_md": false, 00:12:38.054 "write_zeroes": true, 00:12:38.054 "zcopy": true, 00:12:38.054 "get_zone_info": false, 00:12:38.054 "zone_management": false, 00:12:38.054 "zone_append": false, 00:12:38.054 "compare": false, 00:12:38.054 "compare_and_write": false, 00:12:38.054 "abort": true, 00:12:38.055 "seek_hole": false, 00:12:38.055 "seek_data": false, 00:12:38.055 "copy": true, 00:12:38.055 "nvme_iov_md": false 00:12:38.055 }, 00:12:38.055 "memory_domains": [ 00:12:38.055 { 00:12:38.055 "dma_device_id": "system", 00:12:38.055 "dma_device_type": 1 00:12:38.055 }, 00:12:38.055 { 00:12:38.055 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:38.055 "dma_device_type": 2 00:12:38.055 } 00:12:38.055 ], 00:12:38.055 "driver_specific": {} 00:12:38.055 } 00:12:38.055 ] 00:12:38.055 04:03:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:38.055 04:03:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:12:38.055 04:03:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- 
# verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:12:38.055 04:03:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:38.055 04:03:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:38.055 04:03:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:38.055 04:03:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:38.055 04:03:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:38.055 04:03:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:38.055 04:03:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:38.055 04:03:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:38.055 04:03:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:38.055 04:03:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:38.055 04:03:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:38.055 04:03:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:38.055 04:03:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:38.055 04:03:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:38.055 04:03:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:38.055 "name": "Existed_Raid", 00:12:38.055 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:38.055 "strip_size_kb": 64, 00:12:38.055 "state": "configuring", 00:12:38.055 "raid_level": "concat", 00:12:38.055 "superblock": false, 
00:12:38.055 "num_base_bdevs": 4, 00:12:38.055 "num_base_bdevs_discovered": 3, 00:12:38.055 "num_base_bdevs_operational": 4, 00:12:38.055 "base_bdevs_list": [ 00:12:38.055 { 00:12:38.055 "name": "BaseBdev1", 00:12:38.055 "uuid": "999f7003-3c1f-462b-a320-06aa5dc418c9", 00:12:38.055 "is_configured": true, 00:12:38.055 "data_offset": 0, 00:12:38.055 "data_size": 65536 00:12:38.055 }, 00:12:38.055 { 00:12:38.055 "name": null, 00:12:38.055 "uuid": "13027b74-2019-4f39-9102-e6e2de565d2a", 00:12:38.055 "is_configured": false, 00:12:38.055 "data_offset": 0, 00:12:38.055 "data_size": 65536 00:12:38.055 }, 00:12:38.055 { 00:12:38.055 "name": "BaseBdev3", 00:12:38.055 "uuid": "57eb1b2b-0639-4853-9b43-4d841e1fb7c0", 00:12:38.055 "is_configured": true, 00:12:38.055 "data_offset": 0, 00:12:38.055 "data_size": 65536 00:12:38.055 }, 00:12:38.055 { 00:12:38.055 "name": "BaseBdev4", 00:12:38.055 "uuid": "2ad522db-eafe-4262-85af-3c37f58f0d6a", 00:12:38.055 "is_configured": true, 00:12:38.055 "data_offset": 0, 00:12:38.055 "data_size": 65536 00:12:38.055 } 00:12:38.055 ] 00:12:38.055 }' 00:12:38.055 04:03:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:38.055 04:03:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:38.314 04:03:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:38.314 04:03:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:38.314 04:03:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:38.314 04:03:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:12:38.314 04:03:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:38.314 04:03:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:12:38.314 04:03:31 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:12:38.314 04:03:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:38.314 04:03:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:38.314 [2024-12-06 04:03:31.659868] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:12:38.314 04:03:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:38.314 04:03:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:12:38.314 04:03:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:38.314 04:03:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:38.314 04:03:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:38.314 04:03:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:38.314 04:03:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:38.314 04:03:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:38.314 04:03:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:38.314 04:03:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:38.314 04:03:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:38.573 04:03:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:38.573 04:03:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:38.573 04:03:31 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:38.573 04:03:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:38.573 04:03:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:38.573 04:03:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:38.573 "name": "Existed_Raid", 00:12:38.573 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:38.573 "strip_size_kb": 64, 00:12:38.573 "state": "configuring", 00:12:38.573 "raid_level": "concat", 00:12:38.573 "superblock": false, 00:12:38.573 "num_base_bdevs": 4, 00:12:38.574 "num_base_bdevs_discovered": 2, 00:12:38.574 "num_base_bdevs_operational": 4, 00:12:38.574 "base_bdevs_list": [ 00:12:38.574 { 00:12:38.574 "name": "BaseBdev1", 00:12:38.574 "uuid": "999f7003-3c1f-462b-a320-06aa5dc418c9", 00:12:38.574 "is_configured": true, 00:12:38.574 "data_offset": 0, 00:12:38.574 "data_size": 65536 00:12:38.574 }, 00:12:38.574 { 00:12:38.574 "name": null, 00:12:38.574 "uuid": "13027b74-2019-4f39-9102-e6e2de565d2a", 00:12:38.574 "is_configured": false, 00:12:38.574 "data_offset": 0, 00:12:38.574 "data_size": 65536 00:12:38.574 }, 00:12:38.574 { 00:12:38.574 "name": null, 00:12:38.574 "uuid": "57eb1b2b-0639-4853-9b43-4d841e1fb7c0", 00:12:38.574 "is_configured": false, 00:12:38.574 "data_offset": 0, 00:12:38.574 "data_size": 65536 00:12:38.574 }, 00:12:38.574 { 00:12:38.574 "name": "BaseBdev4", 00:12:38.574 "uuid": "2ad522db-eafe-4262-85af-3c37f58f0d6a", 00:12:38.574 "is_configured": true, 00:12:38.574 "data_offset": 0, 00:12:38.574 "data_size": 65536 00:12:38.574 } 00:12:38.574 ] 00:12:38.574 }' 00:12:38.574 04:03:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:38.574 04:03:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:38.832 04:03:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # jq 
'.[0].base_bdevs_list[2].is_configured' 00:12:38.832 04:03:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:38.832 04:03:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:38.832 04:03:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:38.832 04:03:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:38.832 04:03:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:12:38.832 04:03:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:12:38.832 04:03:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:38.832 04:03:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:38.832 [2024-12-06 04:03:32.155068] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:38.832 04:03:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:38.833 04:03:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:12:38.833 04:03:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:38.833 04:03:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:38.833 04:03:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:38.833 04:03:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:38.833 04:03:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:38.833 04:03:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:12:38.833 04:03:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:38.833 04:03:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:38.833 04:03:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:38.833 04:03:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:38.833 04:03:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:38.833 04:03:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:38.833 04:03:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:39.091 04:03:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:39.091 04:03:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:39.091 "name": "Existed_Raid", 00:12:39.092 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:39.092 "strip_size_kb": 64, 00:12:39.092 "state": "configuring", 00:12:39.092 "raid_level": "concat", 00:12:39.092 "superblock": false, 00:12:39.092 "num_base_bdevs": 4, 00:12:39.092 "num_base_bdevs_discovered": 3, 00:12:39.092 "num_base_bdevs_operational": 4, 00:12:39.092 "base_bdevs_list": [ 00:12:39.092 { 00:12:39.092 "name": "BaseBdev1", 00:12:39.092 "uuid": "999f7003-3c1f-462b-a320-06aa5dc418c9", 00:12:39.092 "is_configured": true, 00:12:39.092 "data_offset": 0, 00:12:39.092 "data_size": 65536 00:12:39.092 }, 00:12:39.092 { 00:12:39.092 "name": null, 00:12:39.092 "uuid": "13027b74-2019-4f39-9102-e6e2de565d2a", 00:12:39.092 "is_configured": false, 00:12:39.092 "data_offset": 0, 00:12:39.092 "data_size": 65536 00:12:39.092 }, 00:12:39.092 { 00:12:39.092 "name": "BaseBdev3", 00:12:39.092 "uuid": "57eb1b2b-0639-4853-9b43-4d841e1fb7c0", 00:12:39.092 "is_configured": 
true, 00:12:39.092 "data_offset": 0, 00:12:39.092 "data_size": 65536 00:12:39.092 }, 00:12:39.092 { 00:12:39.092 "name": "BaseBdev4", 00:12:39.092 "uuid": "2ad522db-eafe-4262-85af-3c37f58f0d6a", 00:12:39.092 "is_configured": true, 00:12:39.092 "data_offset": 0, 00:12:39.092 "data_size": 65536 00:12:39.092 } 00:12:39.092 ] 00:12:39.092 }' 00:12:39.092 04:03:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:39.092 04:03:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:39.351 04:03:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:39.351 04:03:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:12:39.351 04:03:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:39.351 04:03:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:39.351 04:03:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:39.351 04:03:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:12:39.351 04:03:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:12:39.351 04:03:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:39.351 04:03:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:39.351 [2024-12-06 04:03:32.658234] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:12:39.610 04:03:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:39.610 04:03:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:12:39.610 04:03:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 
-- # local raid_bdev_name=Existed_Raid 00:12:39.610 04:03:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:39.610 04:03:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:39.610 04:03:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:39.610 04:03:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:39.610 04:03:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:39.610 04:03:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:39.610 04:03:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:39.610 04:03:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:39.610 04:03:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:39.610 04:03:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:39.610 04:03:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:39.610 04:03:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:39.610 04:03:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:39.610 04:03:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:39.610 "name": "Existed_Raid", 00:12:39.610 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:39.610 "strip_size_kb": 64, 00:12:39.610 "state": "configuring", 00:12:39.610 "raid_level": "concat", 00:12:39.610 "superblock": false, 00:12:39.610 "num_base_bdevs": 4, 00:12:39.610 "num_base_bdevs_discovered": 2, 00:12:39.610 "num_base_bdevs_operational": 4, 00:12:39.610 
"base_bdevs_list": [ 00:12:39.610 { 00:12:39.610 "name": null, 00:12:39.610 "uuid": "999f7003-3c1f-462b-a320-06aa5dc418c9", 00:12:39.610 "is_configured": false, 00:12:39.610 "data_offset": 0, 00:12:39.610 "data_size": 65536 00:12:39.610 }, 00:12:39.610 { 00:12:39.610 "name": null, 00:12:39.610 "uuid": "13027b74-2019-4f39-9102-e6e2de565d2a", 00:12:39.610 "is_configured": false, 00:12:39.610 "data_offset": 0, 00:12:39.610 "data_size": 65536 00:12:39.610 }, 00:12:39.610 { 00:12:39.610 "name": "BaseBdev3", 00:12:39.610 "uuid": "57eb1b2b-0639-4853-9b43-4d841e1fb7c0", 00:12:39.610 "is_configured": true, 00:12:39.610 "data_offset": 0, 00:12:39.610 "data_size": 65536 00:12:39.610 }, 00:12:39.610 { 00:12:39.610 "name": "BaseBdev4", 00:12:39.610 "uuid": "2ad522db-eafe-4262-85af-3c37f58f0d6a", 00:12:39.610 "is_configured": true, 00:12:39.610 "data_offset": 0, 00:12:39.610 "data_size": 65536 00:12:39.610 } 00:12:39.610 ] 00:12:39.610 }' 00:12:39.610 04:03:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:39.610 04:03:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:39.869 04:03:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:39.869 04:03:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:39.869 04:03:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:39.869 04:03:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:12:40.128 04:03:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:40.128 04:03:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:12:40.128 04:03:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:12:40.128 04:03:33 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:40.128 04:03:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:40.128 [2024-12-06 04:03:33.253209] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:40.128 04:03:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:40.128 04:03:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:12:40.128 04:03:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:40.128 04:03:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:40.128 04:03:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:40.128 04:03:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:40.128 04:03:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:40.128 04:03:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:40.128 04:03:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:40.128 04:03:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:40.128 04:03:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:40.128 04:03:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:40.128 04:03:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:40.128 04:03:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:40.129 04:03:33 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:40.129 04:03:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:40.129 04:03:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:40.129 "name": "Existed_Raid", 00:12:40.129 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:40.129 "strip_size_kb": 64, 00:12:40.129 "state": "configuring", 00:12:40.129 "raid_level": "concat", 00:12:40.129 "superblock": false, 00:12:40.129 "num_base_bdevs": 4, 00:12:40.129 "num_base_bdevs_discovered": 3, 00:12:40.129 "num_base_bdevs_operational": 4, 00:12:40.129 "base_bdevs_list": [ 00:12:40.129 { 00:12:40.129 "name": null, 00:12:40.129 "uuid": "999f7003-3c1f-462b-a320-06aa5dc418c9", 00:12:40.129 "is_configured": false, 00:12:40.129 "data_offset": 0, 00:12:40.129 "data_size": 65536 00:12:40.129 }, 00:12:40.129 { 00:12:40.129 "name": "BaseBdev2", 00:12:40.129 "uuid": "13027b74-2019-4f39-9102-e6e2de565d2a", 00:12:40.129 "is_configured": true, 00:12:40.129 "data_offset": 0, 00:12:40.129 "data_size": 65536 00:12:40.129 }, 00:12:40.129 { 00:12:40.129 "name": "BaseBdev3", 00:12:40.129 "uuid": "57eb1b2b-0639-4853-9b43-4d841e1fb7c0", 00:12:40.129 "is_configured": true, 00:12:40.129 "data_offset": 0, 00:12:40.129 "data_size": 65536 00:12:40.129 }, 00:12:40.129 { 00:12:40.129 "name": "BaseBdev4", 00:12:40.129 "uuid": "2ad522db-eafe-4262-85af-3c37f58f0d6a", 00:12:40.129 "is_configured": true, 00:12:40.129 "data_offset": 0, 00:12:40.129 "data_size": 65536 00:12:40.129 } 00:12:40.129 ] 00:12:40.129 }' 00:12:40.129 04:03:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:40.129 04:03:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:40.429 04:03:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:40.429 04:03:33 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:12:40.429 04:03:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:40.429 04:03:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:40.429 04:03:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:40.429 04:03:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:12:40.429 04:03:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:12:40.429 04:03:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:40.429 04:03:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:40.429 04:03:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:40.429 04:03:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:40.429 04:03:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 999f7003-3c1f-462b-a320-06aa5dc418c9 00:12:40.429 04:03:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:40.429 04:03:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:40.689 [2024-12-06 04:03:33.818344] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:12:40.689 [2024-12-06 04:03:33.818394] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:12:40.689 [2024-12-06 04:03:33.818402] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 262144, blocklen 512 00:12:40.689 [2024-12-06 04:03:33.818680] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:12:40.689 [2024-12-06 04:03:33.818841] 
bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:12:40.689 [2024-12-06 04:03:33.818853] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:12:40.689 [2024-12-06 04:03:33.819167] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:40.690 NewBaseBdev 00:12:40.690 04:03:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:40.690 04:03:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:12:40.690 04:03:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:12:40.690 04:03:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:40.690 04:03:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:12:40.690 04:03:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:40.690 04:03:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:40.690 04:03:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:12:40.690 04:03:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:40.690 04:03:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:40.690 04:03:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:40.690 04:03:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:12:40.690 04:03:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:40.690 04:03:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:40.690 [ 00:12:40.690 { 
00:12:40.690 "name": "NewBaseBdev", 00:12:40.690 "aliases": [ 00:12:40.690 "999f7003-3c1f-462b-a320-06aa5dc418c9" 00:12:40.690 ], 00:12:40.690 "product_name": "Malloc disk", 00:12:40.690 "block_size": 512, 00:12:40.690 "num_blocks": 65536, 00:12:40.690 "uuid": "999f7003-3c1f-462b-a320-06aa5dc418c9", 00:12:40.690 "assigned_rate_limits": { 00:12:40.690 "rw_ios_per_sec": 0, 00:12:40.690 "rw_mbytes_per_sec": 0, 00:12:40.690 "r_mbytes_per_sec": 0, 00:12:40.690 "w_mbytes_per_sec": 0 00:12:40.690 }, 00:12:40.690 "claimed": true, 00:12:40.690 "claim_type": "exclusive_write", 00:12:40.690 "zoned": false, 00:12:40.690 "supported_io_types": { 00:12:40.690 "read": true, 00:12:40.690 "write": true, 00:12:40.690 "unmap": true, 00:12:40.690 "flush": true, 00:12:40.690 "reset": true, 00:12:40.690 "nvme_admin": false, 00:12:40.690 "nvme_io": false, 00:12:40.690 "nvme_io_md": false, 00:12:40.690 "write_zeroes": true, 00:12:40.690 "zcopy": true, 00:12:40.690 "get_zone_info": false, 00:12:40.690 "zone_management": false, 00:12:40.690 "zone_append": false, 00:12:40.690 "compare": false, 00:12:40.690 "compare_and_write": false, 00:12:40.690 "abort": true, 00:12:40.690 "seek_hole": false, 00:12:40.690 "seek_data": false, 00:12:40.690 "copy": true, 00:12:40.690 "nvme_iov_md": false 00:12:40.690 }, 00:12:40.690 "memory_domains": [ 00:12:40.690 { 00:12:40.690 "dma_device_id": "system", 00:12:40.690 "dma_device_type": 1 00:12:40.690 }, 00:12:40.690 { 00:12:40.690 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:40.690 "dma_device_type": 2 00:12:40.690 } 00:12:40.690 ], 00:12:40.690 "driver_specific": {} 00:12:40.690 } 00:12:40.690 ] 00:12:40.690 04:03:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:40.690 04:03:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:12:40.690 04:03:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online concat 64 4 00:12:40.690 
04:03:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:40.690 04:03:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:40.690 04:03:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:40.690 04:03:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:40.690 04:03:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:40.690 04:03:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:40.690 04:03:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:40.690 04:03:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:40.690 04:03:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:40.690 04:03:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:40.690 04:03:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:40.690 04:03:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:40.690 04:03:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:40.690 04:03:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:40.690 04:03:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:40.690 "name": "Existed_Raid", 00:12:40.690 "uuid": "b1b53f07-d25e-40f2-a50a-590e3224424e", 00:12:40.690 "strip_size_kb": 64, 00:12:40.690 "state": "online", 00:12:40.690 "raid_level": "concat", 00:12:40.690 "superblock": false, 00:12:40.690 "num_base_bdevs": 4, 00:12:40.690 "num_base_bdevs_discovered": 4, 00:12:40.690 
"num_base_bdevs_operational": 4, 00:12:40.690 "base_bdevs_list": [ 00:12:40.690 { 00:12:40.690 "name": "NewBaseBdev", 00:12:40.690 "uuid": "999f7003-3c1f-462b-a320-06aa5dc418c9", 00:12:40.690 "is_configured": true, 00:12:40.690 "data_offset": 0, 00:12:40.690 "data_size": 65536 00:12:40.690 }, 00:12:40.690 { 00:12:40.690 "name": "BaseBdev2", 00:12:40.690 "uuid": "13027b74-2019-4f39-9102-e6e2de565d2a", 00:12:40.690 "is_configured": true, 00:12:40.690 "data_offset": 0, 00:12:40.690 "data_size": 65536 00:12:40.690 }, 00:12:40.690 { 00:12:40.690 "name": "BaseBdev3", 00:12:40.690 "uuid": "57eb1b2b-0639-4853-9b43-4d841e1fb7c0", 00:12:40.690 "is_configured": true, 00:12:40.690 "data_offset": 0, 00:12:40.690 "data_size": 65536 00:12:40.690 }, 00:12:40.690 { 00:12:40.690 "name": "BaseBdev4", 00:12:40.690 "uuid": "2ad522db-eafe-4262-85af-3c37f58f0d6a", 00:12:40.690 "is_configured": true, 00:12:40.690 "data_offset": 0, 00:12:40.690 "data_size": 65536 00:12:40.690 } 00:12:40.690 ] 00:12:40.690 }' 00:12:40.690 04:03:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:40.690 04:03:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:41.273 04:03:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:12:41.273 04:03:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:12:41.273 04:03:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:12:41.273 04:03:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:12:41.273 04:03:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:12:41.273 04:03:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:12:41.273 04:03:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b 
Existed_Raid 00:12:41.273 04:03:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:12:41.273 04:03:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:41.273 04:03:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:41.273 [2024-12-06 04:03:34.353951] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:41.273 04:03:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:41.273 04:03:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:12:41.273 "name": "Existed_Raid", 00:12:41.273 "aliases": [ 00:12:41.273 "b1b53f07-d25e-40f2-a50a-590e3224424e" 00:12:41.273 ], 00:12:41.273 "product_name": "Raid Volume", 00:12:41.273 "block_size": 512, 00:12:41.273 "num_blocks": 262144, 00:12:41.273 "uuid": "b1b53f07-d25e-40f2-a50a-590e3224424e", 00:12:41.273 "assigned_rate_limits": { 00:12:41.273 "rw_ios_per_sec": 0, 00:12:41.273 "rw_mbytes_per_sec": 0, 00:12:41.273 "r_mbytes_per_sec": 0, 00:12:41.273 "w_mbytes_per_sec": 0 00:12:41.273 }, 00:12:41.273 "claimed": false, 00:12:41.273 "zoned": false, 00:12:41.273 "supported_io_types": { 00:12:41.273 "read": true, 00:12:41.273 "write": true, 00:12:41.273 "unmap": true, 00:12:41.273 "flush": true, 00:12:41.273 "reset": true, 00:12:41.273 "nvme_admin": false, 00:12:41.273 "nvme_io": false, 00:12:41.273 "nvme_io_md": false, 00:12:41.273 "write_zeroes": true, 00:12:41.273 "zcopy": false, 00:12:41.273 "get_zone_info": false, 00:12:41.273 "zone_management": false, 00:12:41.273 "zone_append": false, 00:12:41.273 "compare": false, 00:12:41.273 "compare_and_write": false, 00:12:41.273 "abort": false, 00:12:41.273 "seek_hole": false, 00:12:41.273 "seek_data": false, 00:12:41.273 "copy": false, 00:12:41.273 "nvme_iov_md": false 00:12:41.273 }, 00:12:41.273 "memory_domains": [ 00:12:41.273 { 00:12:41.273 "dma_device_id": "system", 
00:12:41.273 "dma_device_type": 1 00:12:41.273 }, 00:12:41.273 { 00:12:41.273 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:41.273 "dma_device_type": 2 00:12:41.273 }, 00:12:41.273 { 00:12:41.273 "dma_device_id": "system", 00:12:41.273 "dma_device_type": 1 00:12:41.273 }, 00:12:41.273 { 00:12:41.273 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:41.273 "dma_device_type": 2 00:12:41.273 }, 00:12:41.273 { 00:12:41.273 "dma_device_id": "system", 00:12:41.273 "dma_device_type": 1 00:12:41.273 }, 00:12:41.273 { 00:12:41.273 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:41.273 "dma_device_type": 2 00:12:41.273 }, 00:12:41.273 { 00:12:41.273 "dma_device_id": "system", 00:12:41.273 "dma_device_type": 1 00:12:41.273 }, 00:12:41.273 { 00:12:41.273 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:41.273 "dma_device_type": 2 00:12:41.273 } 00:12:41.273 ], 00:12:41.273 "driver_specific": { 00:12:41.273 "raid": { 00:12:41.273 "uuid": "b1b53f07-d25e-40f2-a50a-590e3224424e", 00:12:41.273 "strip_size_kb": 64, 00:12:41.273 "state": "online", 00:12:41.273 "raid_level": "concat", 00:12:41.273 "superblock": false, 00:12:41.273 "num_base_bdevs": 4, 00:12:41.273 "num_base_bdevs_discovered": 4, 00:12:41.273 "num_base_bdevs_operational": 4, 00:12:41.273 "base_bdevs_list": [ 00:12:41.273 { 00:12:41.273 "name": "NewBaseBdev", 00:12:41.273 "uuid": "999f7003-3c1f-462b-a320-06aa5dc418c9", 00:12:41.273 "is_configured": true, 00:12:41.273 "data_offset": 0, 00:12:41.273 "data_size": 65536 00:12:41.273 }, 00:12:41.273 { 00:12:41.273 "name": "BaseBdev2", 00:12:41.273 "uuid": "13027b74-2019-4f39-9102-e6e2de565d2a", 00:12:41.273 "is_configured": true, 00:12:41.273 "data_offset": 0, 00:12:41.273 "data_size": 65536 00:12:41.273 }, 00:12:41.273 { 00:12:41.273 "name": "BaseBdev3", 00:12:41.273 "uuid": "57eb1b2b-0639-4853-9b43-4d841e1fb7c0", 00:12:41.273 "is_configured": true, 00:12:41.273 "data_offset": 0, 00:12:41.273 "data_size": 65536 00:12:41.273 }, 00:12:41.273 { 00:12:41.273 "name": "BaseBdev4", 
00:12:41.273 "uuid": "2ad522db-eafe-4262-85af-3c37f58f0d6a", 00:12:41.273 "is_configured": true, 00:12:41.273 "data_offset": 0, 00:12:41.273 "data_size": 65536 00:12:41.273 } 00:12:41.273 ] 00:12:41.273 } 00:12:41.273 } 00:12:41.273 }' 00:12:41.273 04:03:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:12:41.273 04:03:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:12:41.273 BaseBdev2 00:12:41.273 BaseBdev3 00:12:41.273 BaseBdev4' 00:12:41.273 04:03:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:41.273 04:03:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:12:41.273 04:03:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:41.273 04:03:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:12:41.273 04:03:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:41.273 04:03:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:41.273 04:03:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:41.273 04:03:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:41.273 04:03:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:41.273 04:03:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:41.273 04:03:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:41.273 04:03:34 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:12:41.273 04:03:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:41.273 04:03:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:41.273 04:03:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:41.273 04:03:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:41.273 04:03:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:41.273 04:03:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:41.273 04:03:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:41.274 04:03:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:12:41.274 04:03:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:41.274 04:03:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:41.274 04:03:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:41.274 04:03:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:41.533 04:03:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:41.533 04:03:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:41.533 04:03:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:41.533 04:03:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:12:41.533 04:03:34 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:41.533 04:03:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:41.533 04:03:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:41.533 04:03:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:41.533 04:03:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:41.533 04:03:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:41.533 04:03:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:12:41.533 04:03:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:41.533 04:03:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:41.533 [2024-12-06 04:03:34.700980] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:12:41.533 [2024-12-06 04:03:34.701020] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:41.533 [2024-12-06 04:03:34.701126] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:41.533 [2024-12-06 04:03:34.701210] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:41.533 [2024-12-06 04:03:34.701227] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:12:41.533 04:03:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:41.533 04:03:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 71377 00:12:41.533 04:03:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 
-- # '[' -z 71377 ']' 00:12:41.533 04:03:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # kill -0 71377 00:12:41.533 04:03:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # uname 00:12:41.533 04:03:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:41.533 04:03:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71377 00:12:41.533 killing process with pid 71377 00:12:41.533 04:03:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:41.533 04:03:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:41.533 04:03:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71377' 00:12:41.533 04:03:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@973 -- # kill 71377 00:12:41.533 [2024-12-06 04:03:34.750256] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:12:41.533 04:03:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@978 -- # wait 71377 00:12:42.102 [2024-12-06 04:03:35.225669] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:12:43.479 04:03:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:12:43.479 00:12:43.479 real 0m12.228s 00:12:43.479 user 0m19.324s 00:12:43.479 sys 0m1.864s 00:12:43.479 04:03:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:43.479 04:03:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:43.479 ************************************ 00:12:43.479 END TEST raid_state_function_test 00:12:43.479 ************************************ 00:12:43.479 04:03:36 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test concat 4 true 
00:12:43.479 04:03:36 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:12:43.479 04:03:36 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:43.479 04:03:36 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:12:43.479 ************************************ 00:12:43.479 START TEST raid_state_function_test_sb 00:12:43.479 ************************************ 00:12:43.479 04:03:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test concat 4 true 00:12:43.479 04:03:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=concat 00:12:43.479 04:03:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:12:43.479 04:03:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:12:43.479 04:03:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:12:43.479 04:03:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:12:43.479 04:03:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:12:43.479 04:03:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:12:43.479 04:03:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:12:43.479 04:03:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:12:43.479 04:03:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:12:43.479 04:03:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:12:43.479 04:03:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:12:43.479 04:03:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:12:43.479 04:03:36 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:12:43.479 04:03:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:12:43.479 04:03:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:12:43.479 04:03:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:12:43.479 04:03:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:12:43.479 04:03:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:12:43.479 04:03:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:12:43.479 04:03:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:12:43.479 04:03:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:12:43.479 04:03:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:12:43.479 04:03:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:12:43.479 04:03:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']' 00:12:43.479 04:03:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:12:43.479 04:03:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:12:43.479 04:03:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:12:43.479 04:03:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:12:43.479 04:03:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=72057 00:12:43.479 04:03:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # 
/home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:12:43.479 Process raid pid: 72057 00:12:43.479 04:03:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 72057' 00:12:43.479 04:03:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 72057 00:12:43.479 04:03:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 72057 ']' 00:12:43.479 04:03:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:43.479 04:03:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:43.479 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:43.479 04:03:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:43.479 04:03:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:43.479 04:03:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:43.479 [2024-12-06 04:03:36.715737] Starting SPDK v25.01-pre git sha1 a4d2a837b / DPDK 24.03.0 initialization... 
00:12:43.479 [2024-12-06 04:03:36.715911] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:43.738 [2024-12-06 04:03:36.887430] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:43.738 [2024-12-06 04:03:37.022794] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:43.998 [2024-12-06 04:03:37.265284] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:43.998 [2024-12-06 04:03:37.265324] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:44.568 04:03:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:44.568 04:03:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:12:44.568 04:03:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:12:44.568 04:03:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:44.568 04:03:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:44.568 [2024-12-06 04:03:37.675033] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:12:44.568 [2024-12-06 04:03:37.675110] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:12:44.568 [2024-12-06 04:03:37.675123] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:12:44.568 [2024-12-06 04:03:37.675134] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:12:44.568 [2024-12-06 04:03:37.675148] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to 
find bdev with name: BaseBdev3 00:12:44.568 [2024-12-06 04:03:37.675159] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:12:44.568 [2024-12-06 04:03:37.675167] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:12:44.568 [2024-12-06 04:03:37.675177] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:12:44.568 04:03:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:44.568 04:03:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:12:44.568 04:03:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:44.568 04:03:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:44.568 04:03:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:44.568 04:03:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:44.568 04:03:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:44.568 04:03:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:44.568 04:03:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:44.568 04:03:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:44.568 04:03:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:44.568 04:03:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:44.568 04:03:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:44.568 04:03:37 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:44.568 04:03:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:44.568 04:03:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:44.568 04:03:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:44.568 "name": "Existed_Raid", 00:12:44.568 "uuid": "ab31e5a1-ada2-4c60-b051-b7011b44c33e", 00:12:44.568 "strip_size_kb": 64, 00:12:44.568 "state": "configuring", 00:12:44.568 "raid_level": "concat", 00:12:44.568 "superblock": true, 00:12:44.568 "num_base_bdevs": 4, 00:12:44.568 "num_base_bdevs_discovered": 0, 00:12:44.568 "num_base_bdevs_operational": 4, 00:12:44.568 "base_bdevs_list": [ 00:12:44.568 { 00:12:44.568 "name": "BaseBdev1", 00:12:44.568 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:44.568 "is_configured": false, 00:12:44.568 "data_offset": 0, 00:12:44.568 "data_size": 0 00:12:44.568 }, 00:12:44.568 { 00:12:44.568 "name": "BaseBdev2", 00:12:44.568 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:44.568 "is_configured": false, 00:12:44.568 "data_offset": 0, 00:12:44.568 "data_size": 0 00:12:44.568 }, 00:12:44.568 { 00:12:44.568 "name": "BaseBdev3", 00:12:44.568 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:44.568 "is_configured": false, 00:12:44.568 "data_offset": 0, 00:12:44.568 "data_size": 0 00:12:44.568 }, 00:12:44.568 { 00:12:44.568 "name": "BaseBdev4", 00:12:44.568 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:44.568 "is_configured": false, 00:12:44.568 "data_offset": 0, 00:12:44.568 "data_size": 0 00:12:44.568 } 00:12:44.568 ] 00:12:44.568 }' 00:12:44.568 04:03:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:44.569 04:03:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:44.828 04:03:38 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:12:44.828 04:03:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:44.828 04:03:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:44.828 [2024-12-06 04:03:38.134191] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:12:44.828 [2024-12-06 04:03:38.134241] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:12:44.828 04:03:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:44.828 04:03:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:12:44.828 04:03:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:44.829 04:03:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:44.829 [2024-12-06 04:03:38.142193] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:12:44.829 [2024-12-06 04:03:38.142238] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:12:44.829 [2024-12-06 04:03:38.142249] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:12:44.829 [2024-12-06 04:03:38.142260] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:12:44.829 [2024-12-06 04:03:38.142268] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:12:44.829 [2024-12-06 04:03:38.142278] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:12:44.829 [2024-12-06 04:03:38.142285] bdev.c:8674:bdev_open_ext: *NOTICE*: 
Currently unable to find bdev with name: BaseBdev4 00:12:44.829 [2024-12-06 04:03:38.142296] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:12:44.829 04:03:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:44.829 04:03:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:12:44.829 04:03:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:44.829 04:03:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:45.089 [2024-12-06 04:03:38.189450] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:45.089 BaseBdev1 00:12:45.089 04:03:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:45.089 04:03:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:12:45.089 04:03:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:12:45.089 04:03:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:45.089 04:03:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:12:45.089 04:03:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:45.089 04:03:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:45.089 04:03:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:12:45.089 04:03:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:45.089 04:03:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:45.089 04:03:38 bdev_raid.raid_state_function_test_sb 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:45.089 04:03:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:12:45.089 04:03:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:45.089 04:03:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:45.089 [ 00:12:45.089 { 00:12:45.089 "name": "BaseBdev1", 00:12:45.089 "aliases": [ 00:12:45.089 "30d13c63-dc71-498f-a293-4e6c813cb02f" 00:12:45.089 ], 00:12:45.089 "product_name": "Malloc disk", 00:12:45.089 "block_size": 512, 00:12:45.089 "num_blocks": 65536, 00:12:45.089 "uuid": "30d13c63-dc71-498f-a293-4e6c813cb02f", 00:12:45.089 "assigned_rate_limits": { 00:12:45.089 "rw_ios_per_sec": 0, 00:12:45.089 "rw_mbytes_per_sec": 0, 00:12:45.089 "r_mbytes_per_sec": 0, 00:12:45.089 "w_mbytes_per_sec": 0 00:12:45.089 }, 00:12:45.089 "claimed": true, 00:12:45.089 "claim_type": "exclusive_write", 00:12:45.089 "zoned": false, 00:12:45.089 "supported_io_types": { 00:12:45.089 "read": true, 00:12:45.089 "write": true, 00:12:45.089 "unmap": true, 00:12:45.089 "flush": true, 00:12:45.089 "reset": true, 00:12:45.089 "nvme_admin": false, 00:12:45.089 "nvme_io": false, 00:12:45.089 "nvme_io_md": false, 00:12:45.089 "write_zeroes": true, 00:12:45.089 "zcopy": true, 00:12:45.089 "get_zone_info": false, 00:12:45.089 "zone_management": false, 00:12:45.089 "zone_append": false, 00:12:45.089 "compare": false, 00:12:45.089 "compare_and_write": false, 00:12:45.089 "abort": true, 00:12:45.089 "seek_hole": false, 00:12:45.089 "seek_data": false, 00:12:45.089 "copy": true, 00:12:45.089 "nvme_iov_md": false 00:12:45.089 }, 00:12:45.089 "memory_domains": [ 00:12:45.089 { 00:12:45.089 "dma_device_id": "system", 00:12:45.089 "dma_device_type": 1 00:12:45.089 }, 00:12:45.089 { 00:12:45.089 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:45.089 "dma_device_type": 2 00:12:45.089 } 
00:12:45.089 ], 00:12:45.089 "driver_specific": {} 00:12:45.089 } 00:12:45.089 ] 00:12:45.089 04:03:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:45.089 04:03:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:12:45.089 04:03:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:12:45.089 04:03:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:45.089 04:03:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:45.089 04:03:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:45.089 04:03:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:45.089 04:03:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:45.089 04:03:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:45.089 04:03:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:45.089 04:03:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:45.089 04:03:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:45.089 04:03:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:45.089 04:03:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:45.089 04:03:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:45.089 04:03:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:45.089 04:03:38 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:45.089 04:03:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:45.089 "name": "Existed_Raid", 00:12:45.089 "uuid": "d26e76a1-2a4b-44cf-9010-20a0a80b864a", 00:12:45.089 "strip_size_kb": 64, 00:12:45.089 "state": "configuring", 00:12:45.089 "raid_level": "concat", 00:12:45.089 "superblock": true, 00:12:45.089 "num_base_bdevs": 4, 00:12:45.089 "num_base_bdevs_discovered": 1, 00:12:45.089 "num_base_bdevs_operational": 4, 00:12:45.089 "base_bdevs_list": [ 00:12:45.089 { 00:12:45.089 "name": "BaseBdev1", 00:12:45.089 "uuid": "30d13c63-dc71-498f-a293-4e6c813cb02f", 00:12:45.089 "is_configured": true, 00:12:45.089 "data_offset": 2048, 00:12:45.089 "data_size": 63488 00:12:45.089 }, 00:12:45.089 { 00:12:45.089 "name": "BaseBdev2", 00:12:45.089 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:45.089 "is_configured": false, 00:12:45.089 "data_offset": 0, 00:12:45.089 "data_size": 0 00:12:45.089 }, 00:12:45.089 { 00:12:45.089 "name": "BaseBdev3", 00:12:45.089 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:45.089 "is_configured": false, 00:12:45.089 "data_offset": 0, 00:12:45.089 "data_size": 0 00:12:45.089 }, 00:12:45.089 { 00:12:45.089 "name": "BaseBdev4", 00:12:45.089 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:45.089 "is_configured": false, 00:12:45.089 "data_offset": 0, 00:12:45.089 "data_size": 0 00:12:45.089 } 00:12:45.089 ] 00:12:45.089 }' 00:12:45.089 04:03:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:45.089 04:03:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:45.658 04:03:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:12:45.658 04:03:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:45.658 04:03:38 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:45.658 [2024-12-06 04:03:38.724802] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:12:45.658 [2024-12-06 04:03:38.724868] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:12:45.658 04:03:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:45.658 04:03:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:12:45.658 04:03:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:45.658 04:03:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:45.658 [2024-12-06 04:03:38.732859] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:45.658 [2024-12-06 04:03:38.734947] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:12:45.658 [2024-12-06 04:03:38.734997] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:12:45.658 [2024-12-06 04:03:38.735008] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:12:45.658 [2024-12-06 04:03:38.735021] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:12:45.658 [2024-12-06 04:03:38.735030] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:12:45.658 [2024-12-06 04:03:38.735039] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:12:45.658 04:03:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:45.658 04:03:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # 
(( i = 1 )) 00:12:45.658 04:03:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:12:45.658 04:03:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:12:45.658 04:03:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:45.658 04:03:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:45.658 04:03:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:45.658 04:03:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:45.658 04:03:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:45.658 04:03:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:45.658 04:03:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:45.658 04:03:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:45.658 04:03:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:45.658 04:03:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:45.658 04:03:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:45.658 04:03:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:45.659 04:03:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:45.659 04:03:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:45.659 04:03:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # 
raid_bdev_info='{ 00:12:45.659 "name": "Existed_Raid", 00:12:45.659 "uuid": "2926de00-d04e-428d-9ea2-a7ff4c99650a", 00:12:45.659 "strip_size_kb": 64, 00:12:45.659 "state": "configuring", 00:12:45.659 "raid_level": "concat", 00:12:45.659 "superblock": true, 00:12:45.659 "num_base_bdevs": 4, 00:12:45.659 "num_base_bdevs_discovered": 1, 00:12:45.659 "num_base_bdevs_operational": 4, 00:12:45.659 "base_bdevs_list": [ 00:12:45.659 { 00:12:45.659 "name": "BaseBdev1", 00:12:45.659 "uuid": "30d13c63-dc71-498f-a293-4e6c813cb02f", 00:12:45.659 "is_configured": true, 00:12:45.659 "data_offset": 2048, 00:12:45.659 "data_size": 63488 00:12:45.659 }, 00:12:45.659 { 00:12:45.659 "name": "BaseBdev2", 00:12:45.659 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:45.659 "is_configured": false, 00:12:45.659 "data_offset": 0, 00:12:45.659 "data_size": 0 00:12:45.659 }, 00:12:45.659 { 00:12:45.659 "name": "BaseBdev3", 00:12:45.659 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:45.659 "is_configured": false, 00:12:45.659 "data_offset": 0, 00:12:45.659 "data_size": 0 00:12:45.659 }, 00:12:45.659 { 00:12:45.659 "name": "BaseBdev4", 00:12:45.659 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:45.659 "is_configured": false, 00:12:45.659 "data_offset": 0, 00:12:45.659 "data_size": 0 00:12:45.659 } 00:12:45.659 ] 00:12:45.659 }' 00:12:45.659 04:03:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:45.659 04:03:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:45.917 04:03:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:12:45.917 04:03:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:45.917 04:03:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:45.917 [2024-12-06 04:03:39.260417] bdev_raid.c:3326:raid_bdev_configure_base_bdev: 
*DEBUG*: bdev BaseBdev2 is claimed 00:12:45.917 BaseBdev2 00:12:45.917 04:03:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:45.917 04:03:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:12:45.917 04:03:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:12:45.917 04:03:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:45.917 04:03:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:12:45.917 04:03:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:45.917 04:03:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:45.917 04:03:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:12:45.917 04:03:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:45.917 04:03:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:46.177 04:03:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:46.177 04:03:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:12:46.177 04:03:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:46.177 04:03:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:46.177 [ 00:12:46.177 { 00:12:46.177 "name": "BaseBdev2", 00:12:46.177 "aliases": [ 00:12:46.177 "17521ceb-5702-465d-b56e-3105f1c8e3db" 00:12:46.177 ], 00:12:46.177 "product_name": "Malloc disk", 00:12:46.177 "block_size": 512, 00:12:46.177 "num_blocks": 65536, 00:12:46.177 "uuid": "17521ceb-5702-465d-b56e-3105f1c8e3db", 
00:12:46.177 "assigned_rate_limits": { 00:12:46.177 "rw_ios_per_sec": 0, 00:12:46.177 "rw_mbytes_per_sec": 0, 00:12:46.177 "r_mbytes_per_sec": 0, 00:12:46.177 "w_mbytes_per_sec": 0 00:12:46.177 }, 00:12:46.177 "claimed": true, 00:12:46.177 "claim_type": "exclusive_write", 00:12:46.177 "zoned": false, 00:12:46.177 "supported_io_types": { 00:12:46.177 "read": true, 00:12:46.177 "write": true, 00:12:46.177 "unmap": true, 00:12:46.177 "flush": true, 00:12:46.177 "reset": true, 00:12:46.177 "nvme_admin": false, 00:12:46.177 "nvme_io": false, 00:12:46.177 "nvme_io_md": false, 00:12:46.177 "write_zeroes": true, 00:12:46.177 "zcopy": true, 00:12:46.177 "get_zone_info": false, 00:12:46.177 "zone_management": false, 00:12:46.177 "zone_append": false, 00:12:46.177 "compare": false, 00:12:46.177 "compare_and_write": false, 00:12:46.177 "abort": true, 00:12:46.177 "seek_hole": false, 00:12:46.177 "seek_data": false, 00:12:46.177 "copy": true, 00:12:46.177 "nvme_iov_md": false 00:12:46.177 }, 00:12:46.177 "memory_domains": [ 00:12:46.177 { 00:12:46.177 "dma_device_id": "system", 00:12:46.177 "dma_device_type": 1 00:12:46.177 }, 00:12:46.177 { 00:12:46.177 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:46.177 "dma_device_type": 2 00:12:46.177 } 00:12:46.177 ], 00:12:46.177 "driver_specific": {} 00:12:46.177 } 00:12:46.177 ] 00:12:46.177 04:03:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:46.177 04:03:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:12:46.177 04:03:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:12:46.177 04:03:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:12:46.177 04:03:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:12:46.177 04:03:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 
-- # local raid_bdev_name=Existed_Raid 00:12:46.177 04:03:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:46.177 04:03:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:46.177 04:03:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:46.177 04:03:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:46.177 04:03:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:46.177 04:03:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:46.177 04:03:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:46.177 04:03:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:46.177 04:03:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:46.177 04:03:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:46.177 04:03:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:46.177 04:03:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:46.177 04:03:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:46.177 04:03:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:46.177 "name": "Existed_Raid", 00:12:46.177 "uuid": "2926de00-d04e-428d-9ea2-a7ff4c99650a", 00:12:46.177 "strip_size_kb": 64, 00:12:46.177 "state": "configuring", 00:12:46.177 "raid_level": "concat", 00:12:46.177 "superblock": true, 00:12:46.177 "num_base_bdevs": 4, 00:12:46.177 "num_base_bdevs_discovered": 2, 00:12:46.177 
"num_base_bdevs_operational": 4, 00:12:46.177 "base_bdevs_list": [ 00:12:46.177 { 00:12:46.177 "name": "BaseBdev1", 00:12:46.177 "uuid": "30d13c63-dc71-498f-a293-4e6c813cb02f", 00:12:46.177 "is_configured": true, 00:12:46.177 "data_offset": 2048, 00:12:46.177 "data_size": 63488 00:12:46.177 }, 00:12:46.177 { 00:12:46.177 "name": "BaseBdev2", 00:12:46.177 "uuid": "17521ceb-5702-465d-b56e-3105f1c8e3db", 00:12:46.177 "is_configured": true, 00:12:46.177 "data_offset": 2048, 00:12:46.177 "data_size": 63488 00:12:46.177 }, 00:12:46.177 { 00:12:46.177 "name": "BaseBdev3", 00:12:46.177 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:46.177 "is_configured": false, 00:12:46.177 "data_offset": 0, 00:12:46.177 "data_size": 0 00:12:46.177 }, 00:12:46.177 { 00:12:46.177 "name": "BaseBdev4", 00:12:46.177 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:46.177 "is_configured": false, 00:12:46.177 "data_offset": 0, 00:12:46.177 "data_size": 0 00:12:46.177 } 00:12:46.177 ] 00:12:46.177 }' 00:12:46.177 04:03:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:46.177 04:03:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:46.437 04:03:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:12:46.437 04:03:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:46.437 04:03:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:46.437 [2024-12-06 04:03:39.758284] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:46.437 BaseBdev3 00:12:46.437 04:03:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:46.437 04:03:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:12:46.437 04:03:39 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:12:46.437 04:03:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:46.437 04:03:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:12:46.437 04:03:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:46.437 04:03:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:46.437 04:03:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:12:46.437 04:03:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:46.437 04:03:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:46.437 04:03:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:46.437 04:03:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:12:46.437 04:03:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:46.437 04:03:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:46.437 [ 00:12:46.437 { 00:12:46.437 "name": "BaseBdev3", 00:12:46.437 "aliases": [ 00:12:46.437 "be755fbe-a7c5-4606-b62d-0040039f2880" 00:12:46.437 ], 00:12:46.437 "product_name": "Malloc disk", 00:12:46.437 "block_size": 512, 00:12:46.437 "num_blocks": 65536, 00:12:46.437 "uuid": "be755fbe-a7c5-4606-b62d-0040039f2880", 00:12:46.437 "assigned_rate_limits": { 00:12:46.437 "rw_ios_per_sec": 0, 00:12:46.437 "rw_mbytes_per_sec": 0, 00:12:46.437 "r_mbytes_per_sec": 0, 00:12:46.437 "w_mbytes_per_sec": 0 00:12:46.437 }, 00:12:46.437 "claimed": true, 00:12:46.437 "claim_type": "exclusive_write", 00:12:46.437 "zoned": false, 00:12:46.437 "supported_io_types": { 
00:12:46.437 "read": true, 00:12:46.437 "write": true, 00:12:46.437 "unmap": true, 00:12:46.437 "flush": true, 00:12:46.437 "reset": true, 00:12:46.437 "nvme_admin": false, 00:12:46.437 "nvme_io": false, 00:12:46.437 "nvme_io_md": false, 00:12:46.437 "write_zeroes": true, 00:12:46.437 "zcopy": true, 00:12:46.437 "get_zone_info": false, 00:12:46.437 "zone_management": false, 00:12:46.438 "zone_append": false, 00:12:46.438 "compare": false, 00:12:46.438 "compare_and_write": false, 00:12:46.438 "abort": true, 00:12:46.438 "seek_hole": false, 00:12:46.438 "seek_data": false, 00:12:46.438 "copy": true, 00:12:46.438 "nvme_iov_md": false 00:12:46.438 }, 00:12:46.698 "memory_domains": [ 00:12:46.698 { 00:12:46.698 "dma_device_id": "system", 00:12:46.698 "dma_device_type": 1 00:12:46.698 }, 00:12:46.698 { 00:12:46.698 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:46.698 "dma_device_type": 2 00:12:46.698 } 00:12:46.698 ], 00:12:46.698 "driver_specific": {} 00:12:46.698 } 00:12:46.698 ] 00:12:46.698 04:03:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:46.698 04:03:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:12:46.698 04:03:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:12:46.698 04:03:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:12:46.698 04:03:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:12:46.698 04:03:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:46.698 04:03:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:46.698 04:03:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:46.698 04:03:39 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:46.698 04:03:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:46.698 04:03:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:46.698 04:03:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:46.698 04:03:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:46.698 04:03:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:46.698 04:03:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:46.698 04:03:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:46.698 04:03:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:46.698 04:03:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:46.698 04:03:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:46.698 04:03:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:46.698 "name": "Existed_Raid", 00:12:46.698 "uuid": "2926de00-d04e-428d-9ea2-a7ff4c99650a", 00:12:46.698 "strip_size_kb": 64, 00:12:46.698 "state": "configuring", 00:12:46.698 "raid_level": "concat", 00:12:46.698 "superblock": true, 00:12:46.698 "num_base_bdevs": 4, 00:12:46.698 "num_base_bdevs_discovered": 3, 00:12:46.698 "num_base_bdevs_operational": 4, 00:12:46.698 "base_bdevs_list": [ 00:12:46.698 { 00:12:46.698 "name": "BaseBdev1", 00:12:46.698 "uuid": "30d13c63-dc71-498f-a293-4e6c813cb02f", 00:12:46.698 "is_configured": true, 00:12:46.698 "data_offset": 2048, 00:12:46.698 "data_size": 63488 00:12:46.698 }, 00:12:46.698 { 00:12:46.698 "name": "BaseBdev2", 00:12:46.698 
"uuid": "17521ceb-5702-465d-b56e-3105f1c8e3db", 00:12:46.698 "is_configured": true, 00:12:46.698 "data_offset": 2048, 00:12:46.698 "data_size": 63488 00:12:46.698 }, 00:12:46.698 { 00:12:46.698 "name": "BaseBdev3", 00:12:46.698 "uuid": "be755fbe-a7c5-4606-b62d-0040039f2880", 00:12:46.698 "is_configured": true, 00:12:46.698 "data_offset": 2048, 00:12:46.698 "data_size": 63488 00:12:46.698 }, 00:12:46.698 { 00:12:46.698 "name": "BaseBdev4", 00:12:46.698 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:46.698 "is_configured": false, 00:12:46.698 "data_offset": 0, 00:12:46.698 "data_size": 0 00:12:46.698 } 00:12:46.698 ] 00:12:46.698 }' 00:12:46.698 04:03:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:46.698 04:03:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:46.958 04:03:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:12:46.958 04:03:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:46.958 04:03:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:46.958 [2024-12-06 04:03:40.274304] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:12:46.958 [2024-12-06 04:03:40.274626] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:12:46.958 [2024-12-06 04:03:40.274649] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:12:46.958 [2024-12-06 04:03:40.274963] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:12:46.958 [2024-12-06 04:03:40.275153] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:12:46.958 [2024-12-06 04:03:40.275174] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 
0x617000007e80 00:12:46.958 BaseBdev4 00:12:46.958 [2024-12-06 04:03:40.275342] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:46.958 04:03:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:46.958 04:03:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:12:46.958 04:03:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:12:46.958 04:03:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:46.958 04:03:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:12:46.958 04:03:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:46.958 04:03:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:46.958 04:03:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:12:46.958 04:03:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:46.958 04:03:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:46.958 04:03:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:46.958 04:03:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:12:46.958 04:03:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:46.958 04:03:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:46.958 [ 00:12:46.958 { 00:12:46.958 "name": "BaseBdev4", 00:12:46.958 "aliases": [ 00:12:46.958 "e77de800-a3ca-4d64-bf39-29c979078a79" 00:12:46.958 ], 00:12:46.958 "product_name": "Malloc disk", 00:12:46.958 "block_size": 512, 
00:12:46.958 "num_blocks": 65536, 00:12:46.958 "uuid": "e77de800-a3ca-4d64-bf39-29c979078a79", 00:12:46.958 "assigned_rate_limits": { 00:12:46.958 "rw_ios_per_sec": 0, 00:12:46.958 "rw_mbytes_per_sec": 0, 00:12:46.958 "r_mbytes_per_sec": 0, 00:12:46.958 "w_mbytes_per_sec": 0 00:12:46.958 }, 00:12:46.958 "claimed": true, 00:12:46.958 "claim_type": "exclusive_write", 00:12:46.958 "zoned": false, 00:12:46.958 "supported_io_types": { 00:12:46.958 "read": true, 00:12:46.958 "write": true, 00:12:46.958 "unmap": true, 00:12:46.958 "flush": true, 00:12:46.958 "reset": true, 00:12:46.958 "nvme_admin": false, 00:12:46.958 "nvme_io": false, 00:12:46.958 "nvme_io_md": false, 00:12:46.958 "write_zeroes": true, 00:12:46.958 "zcopy": true, 00:12:46.958 "get_zone_info": false, 00:12:46.958 "zone_management": false, 00:12:46.958 "zone_append": false, 00:12:46.958 "compare": false, 00:12:46.958 "compare_and_write": false, 00:12:46.958 "abort": true, 00:12:46.958 "seek_hole": false, 00:12:46.958 "seek_data": false, 00:12:46.958 "copy": true, 00:12:46.958 "nvme_iov_md": false 00:12:46.958 }, 00:12:46.958 "memory_domains": [ 00:12:46.958 { 00:12:46.958 "dma_device_id": "system", 00:12:46.958 "dma_device_type": 1 00:12:46.958 }, 00:12:46.958 { 00:12:46.958 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:46.958 "dma_device_type": 2 00:12:46.958 } 00:12:46.958 ], 00:12:46.958 "driver_specific": {} 00:12:46.958 } 00:12:46.958 ] 00:12:46.958 04:03:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:46.958 04:03:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:12:46.958 04:03:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:12:46.958 04:03:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:12:46.958 04:03:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 
64 4 00:12:46.958 04:03:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:47.218 04:03:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:47.218 04:03:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:47.218 04:03:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:47.218 04:03:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:47.218 04:03:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:47.218 04:03:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:47.218 04:03:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:47.218 04:03:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:47.218 04:03:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:47.218 04:03:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:47.218 04:03:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:47.218 04:03:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:47.218 04:03:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:47.218 04:03:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:47.218 "name": "Existed_Raid", 00:12:47.218 "uuid": "2926de00-d04e-428d-9ea2-a7ff4c99650a", 00:12:47.218 "strip_size_kb": 64, 00:12:47.218 "state": "online", 00:12:47.218 "raid_level": "concat", 00:12:47.218 "superblock": true, 00:12:47.218 "num_base_bdevs": 
4, 00:12:47.218 "num_base_bdevs_discovered": 4, 00:12:47.218 "num_base_bdevs_operational": 4, 00:12:47.218 "base_bdevs_list": [ 00:12:47.218 { 00:12:47.218 "name": "BaseBdev1", 00:12:47.218 "uuid": "30d13c63-dc71-498f-a293-4e6c813cb02f", 00:12:47.218 "is_configured": true, 00:12:47.218 "data_offset": 2048, 00:12:47.218 "data_size": 63488 00:12:47.218 }, 00:12:47.218 { 00:12:47.218 "name": "BaseBdev2", 00:12:47.218 "uuid": "17521ceb-5702-465d-b56e-3105f1c8e3db", 00:12:47.218 "is_configured": true, 00:12:47.218 "data_offset": 2048, 00:12:47.218 "data_size": 63488 00:12:47.218 }, 00:12:47.218 { 00:12:47.218 "name": "BaseBdev3", 00:12:47.218 "uuid": "be755fbe-a7c5-4606-b62d-0040039f2880", 00:12:47.218 "is_configured": true, 00:12:47.218 "data_offset": 2048, 00:12:47.218 "data_size": 63488 00:12:47.218 }, 00:12:47.218 { 00:12:47.218 "name": "BaseBdev4", 00:12:47.218 "uuid": "e77de800-a3ca-4d64-bf39-29c979078a79", 00:12:47.218 "is_configured": true, 00:12:47.218 "data_offset": 2048, 00:12:47.218 "data_size": 63488 00:12:47.218 } 00:12:47.218 ] 00:12:47.218 }' 00:12:47.218 04:03:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:47.218 04:03:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:47.477 04:03:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:12:47.477 04:03:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:12:47.477 04:03:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:12:47.477 04:03:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:12:47.477 04:03:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:12:47.477 04:03:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:12:47.477 
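By this point `bdev_malloc_create` has supplied BaseBdev3 and BaseBdev4, `num_base_bdevs_discovered` has climbed 2 → 3 → 4, and the dump shows the array "online". A sketch of how the discovered count relates to `base_bdevs_list` (the list is a trimmed stand-in for the dump above):

```python
# Count configured members the way num_base_bdevs_discovered reflects them.
base_bdevs_list = [
    {"name": "BaseBdev1", "is_configured": True},
    {"name": "BaseBdev2", "is_configured": True},
    {"name": "BaseBdev3", "is_configured": True},
    {"name": "BaseBdev4", "is_configured": True},
]
num_discovered = sum(b["is_configured"] for b in base_bdevs_list)
# With all four members configured, the array can leave "configuring" for "online".
assert num_discovered == 4
```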
04:03:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:12:47.477 04:03:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:12:47.477 04:03:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:47.477 04:03:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:47.477 [2024-12-06 04:03:40.817844] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:47.737 04:03:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:47.737 04:03:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:12:47.737 "name": "Existed_Raid", 00:12:47.737 "aliases": [ 00:12:47.737 "2926de00-d04e-428d-9ea2-a7ff4c99650a" 00:12:47.737 ], 00:12:47.737 "product_name": "Raid Volume", 00:12:47.737 "block_size": 512, 00:12:47.737 "num_blocks": 253952, 00:12:47.737 "uuid": "2926de00-d04e-428d-9ea2-a7ff4c99650a", 00:12:47.737 "assigned_rate_limits": { 00:12:47.737 "rw_ios_per_sec": 0, 00:12:47.737 "rw_mbytes_per_sec": 0, 00:12:47.737 "r_mbytes_per_sec": 0, 00:12:47.737 "w_mbytes_per_sec": 0 00:12:47.737 }, 00:12:47.737 "claimed": false, 00:12:47.737 "zoned": false, 00:12:47.737 "supported_io_types": { 00:12:47.737 "read": true, 00:12:47.737 "write": true, 00:12:47.737 "unmap": true, 00:12:47.737 "flush": true, 00:12:47.737 "reset": true, 00:12:47.737 "nvme_admin": false, 00:12:47.737 "nvme_io": false, 00:12:47.737 "nvme_io_md": false, 00:12:47.737 "write_zeroes": true, 00:12:47.737 "zcopy": false, 00:12:47.737 "get_zone_info": false, 00:12:47.737 "zone_management": false, 00:12:47.737 "zone_append": false, 00:12:47.737 "compare": false, 00:12:47.737 "compare_and_write": false, 00:12:47.737 "abort": false, 00:12:47.737 "seek_hole": false, 00:12:47.737 "seek_data": false, 00:12:47.737 "copy": false, 00:12:47.737 
"nvme_iov_md": false 00:12:47.737 }, 00:12:47.737 "memory_domains": [ 00:12:47.737 { 00:12:47.737 "dma_device_id": "system", 00:12:47.737 "dma_device_type": 1 00:12:47.737 }, 00:12:47.737 { 00:12:47.737 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:47.737 "dma_device_type": 2 00:12:47.737 }, 00:12:47.737 { 00:12:47.737 "dma_device_id": "system", 00:12:47.737 "dma_device_type": 1 00:12:47.737 }, 00:12:47.737 { 00:12:47.737 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:47.737 "dma_device_type": 2 00:12:47.737 }, 00:12:47.737 { 00:12:47.737 "dma_device_id": "system", 00:12:47.737 "dma_device_type": 1 00:12:47.737 }, 00:12:47.737 { 00:12:47.737 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:47.737 "dma_device_type": 2 00:12:47.737 }, 00:12:47.737 { 00:12:47.737 "dma_device_id": "system", 00:12:47.737 "dma_device_type": 1 00:12:47.737 }, 00:12:47.737 { 00:12:47.737 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:47.737 "dma_device_type": 2 00:12:47.737 } 00:12:47.737 ], 00:12:47.737 "driver_specific": { 00:12:47.737 "raid": { 00:12:47.737 "uuid": "2926de00-d04e-428d-9ea2-a7ff4c99650a", 00:12:47.737 "strip_size_kb": 64, 00:12:47.737 "state": "online", 00:12:47.737 "raid_level": "concat", 00:12:47.737 "superblock": true, 00:12:47.737 "num_base_bdevs": 4, 00:12:47.737 "num_base_bdevs_discovered": 4, 00:12:47.737 "num_base_bdevs_operational": 4, 00:12:47.737 "base_bdevs_list": [ 00:12:47.737 { 00:12:47.737 "name": "BaseBdev1", 00:12:47.737 "uuid": "30d13c63-dc71-498f-a293-4e6c813cb02f", 00:12:47.737 "is_configured": true, 00:12:47.737 "data_offset": 2048, 00:12:47.737 "data_size": 63488 00:12:47.737 }, 00:12:47.737 { 00:12:47.737 "name": "BaseBdev2", 00:12:47.737 "uuid": "17521ceb-5702-465d-b56e-3105f1c8e3db", 00:12:47.737 "is_configured": true, 00:12:47.737 "data_offset": 2048, 00:12:47.737 "data_size": 63488 00:12:47.737 }, 00:12:47.737 { 00:12:47.737 "name": "BaseBdev3", 00:12:47.737 "uuid": "be755fbe-a7c5-4606-b62d-0040039f2880", 00:12:47.737 "is_configured": true, 
00:12:47.737 "data_offset": 2048, 00:12:47.737 "data_size": 63488 00:12:47.737 }, 00:12:47.737 { 00:12:47.737 "name": "BaseBdev4", 00:12:47.737 "uuid": "e77de800-a3ca-4d64-bf39-29c979078a79", 00:12:47.737 "is_configured": true, 00:12:47.737 "data_offset": 2048, 00:12:47.737 "data_size": 63488 00:12:47.737 } 00:12:47.737 ] 00:12:47.737 } 00:12:47.737 } 00:12:47.737 }' 00:12:47.737 04:03:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:12:47.737 04:03:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:12:47.737 BaseBdev2 00:12:47.737 BaseBdev3 00:12:47.737 BaseBdev4' 00:12:47.737 04:03:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:47.737 04:03:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:12:47.737 04:03:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:47.737 04:03:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:47.737 04:03:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:12:47.737 04:03:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:47.737 04:03:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:47.737 04:03:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:47.737 04:03:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:47.737 04:03:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:47.737 04:03:40 
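`verify_raid_bdev_properties` builds `cmp_raid_bdev` from the Raid Volume with `jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'` and compares it against each configured base bdev; null fields join as empty strings, which is why the value is `'512 '` with trailing spaces (the `\5\1\2\ \ \ ` pattern in the `[[ ]]` test). A Python rendering of that jq join, with trimmed stand-in dicts for the dumps above:

```python
def metadata_key(bdev):
    """Mimic jq's `[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")`,
    where null/absent fields become empty strings."""
    fields = ("block_size", "md_size", "md_interleave", "dif_type")
    return " ".join("" if bdev.get(f) is None else str(bdev[f]) for f in fields)

raid_volume = {"name": "Existed_Raid", "block_size": 512}  # stand-in for the Raid Volume dump
base_bdev = {"name": "BaseBdev1", "block_size": 512}       # stand-in for a base bdev dump

# Metadata formats must match between the raid bdev and every member.
assert metadata_key(raid_volume) == metadata_key(base_bdev) == "512   "
```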
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:47.737 04:03:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:12:47.737 04:03:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:47.737 04:03:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:47.737 04:03:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:47.737 04:03:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:47.737 04:03:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:47.737 04:03:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:47.737 04:03:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:47.737 04:03:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:12:47.737 04:03:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:47.737 04:03:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:47.737 04:03:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:47.737 04:03:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:47.737 04:03:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:47.737 04:03:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:47.737 04:03:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # 
for name in $base_bdev_names 00:12:47.737 04:03:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:12:47.737 04:03:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:47.737 04:03:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:47.737 04:03:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:47.996 04:03:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:47.996 04:03:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:47.996 04:03:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:47.996 04:03:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:12:47.996 04:03:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:47.996 04:03:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:47.996 [2024-12-06 04:03:41.132997] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:12:47.996 [2024-12-06 04:03:41.133036] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:47.996 [2024-12-06 04:03:41.133103] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:47.996 04:03:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:47.996 04:03:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:12:47.996 04:03:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy concat 00:12:47.996 04:03:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # 
case $1 in 00:12:47.996 04:03:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1 00:12:47.996 04:03:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:12:47.996 04:03:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 3 00:12:47.996 04:03:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:47.996 04:03:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:12:47.996 04:03:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:47.996 04:03:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:47.996 04:03:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:47.996 04:03:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:47.996 04:03:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:47.996 04:03:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:47.996 04:03:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:47.996 04:03:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:47.997 04:03:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:47.997 04:03:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:47.997 04:03:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:47.997 04:03:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:12:47.997 04:03:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:47.997 "name": "Existed_Raid", 00:12:47.997 "uuid": "2926de00-d04e-428d-9ea2-a7ff4c99650a", 00:12:47.997 "strip_size_kb": 64, 00:12:47.997 "state": "offline", 00:12:47.997 "raid_level": "concat", 00:12:47.997 "superblock": true, 00:12:47.997 "num_base_bdevs": 4, 00:12:47.997 "num_base_bdevs_discovered": 3, 00:12:47.997 "num_base_bdevs_operational": 3, 00:12:47.997 "base_bdevs_list": [ 00:12:47.997 { 00:12:47.997 "name": null, 00:12:47.997 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:47.997 "is_configured": false, 00:12:47.997 "data_offset": 0, 00:12:47.997 "data_size": 63488 00:12:47.997 }, 00:12:47.997 { 00:12:47.997 "name": "BaseBdev2", 00:12:47.997 "uuid": "17521ceb-5702-465d-b56e-3105f1c8e3db", 00:12:47.997 "is_configured": true, 00:12:47.997 "data_offset": 2048, 00:12:47.997 "data_size": 63488 00:12:47.997 }, 00:12:47.997 { 00:12:47.997 "name": "BaseBdev3", 00:12:47.997 "uuid": "be755fbe-a7c5-4606-b62d-0040039f2880", 00:12:47.997 "is_configured": true, 00:12:47.997 "data_offset": 2048, 00:12:47.997 "data_size": 63488 00:12:47.997 }, 00:12:47.997 { 00:12:47.997 "name": "BaseBdev4", 00:12:47.997 "uuid": "e77de800-a3ca-4d64-bf39-29c979078a79", 00:12:47.997 "is_configured": true, 00:12:47.997 "data_offset": 2048, 00:12:47.997 "data_size": 63488 00:12:47.997 } 00:12:47.997 ] 00:12:47.997 }' 00:12:47.997 04:03:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:47.997 04:03:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:48.565 04:03:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:12:48.565 04:03:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:12:48.565 04:03:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:48.565 
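Deleting BaseBdev1 drives the state machine: `has_redundancy concat` returns 1, so the script expects "offline", and the dump above confirms it (state "offline", a null slot where BaseBdev1 was, operational count down to 3). A sketch of that decision; which levels count as redundant is an assumption here, since the log only shows that concat does not:

```python
# Editor's sketch of the expected-state decision after a member is lost; not SPDK code.
REDUNDANT_LEVELS = {"raid1", "raid5f"}  # assumed set; concat is not in it

def expected_state_after_member_loss(raid_level):
    return "online" if raid_level in REDUNDANT_LEVELS else "offline"

# concat stripes without redundancy, so losing BaseBdev1 takes Existed_Raid offline,
# with num_base_bdevs_operational dropping from 4 to 3.
assert expected_state_after_member_loss("concat") == "offline"
```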
04:03:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:48.565 04:03:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:48.565 04:03:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:12:48.565 04:03:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:48.565 04:03:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:12:48.565 04:03:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:12:48.565 04:03:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:12:48.565 04:03:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:48.565 04:03:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:48.565 [2024-12-06 04:03:41.776075] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:12:48.565 04:03:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:48.565 04:03:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:12:48.565 04:03:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:12:48.565 04:03:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:48.565 04:03:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:48.565 04:03:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:12:48.565 04:03:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:48.565 04:03:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:12:48.824 04:03:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:12:48.824 04:03:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:12:48.824 04:03:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:12:48.824 04:03:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:48.824 04:03:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:48.824 [2024-12-06 04:03:41.932461] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:12:48.824 04:03:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:48.824 04:03:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:12:48.824 04:03:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:12:48.824 04:03:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:48.824 04:03:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:48.824 04:03:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:12:48.824 04:03:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:48.824 04:03:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:48.824 04:03:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:12:48.824 04:03:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:12:48.824 04:03:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:12:48.824 04:03:42 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:48.824 04:03:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:48.824 [2024-12-06 04:03:42.094746] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:12:48.824 [2024-12-06 04:03:42.094807] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:12:49.086 04:03:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:49.086 04:03:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:12:49.086 04:03:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:12:49.086 04:03:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:49.086 04:03:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:12:49.086 04:03:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:49.086 04:03:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:49.086 04:03:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:49.086 04:03:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:12:49.086 04:03:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:12:49.086 04:03:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:12:49.086 04:03:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:12:49.086 04:03:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:12:49.086 04:03:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 
32 512 -b BaseBdev2 00:12:49.086 04:03:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:49.086 04:03:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:49.086 BaseBdev2 00:12:49.086 04:03:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:49.086 04:03:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:12:49.086 04:03:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:12:49.086 04:03:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:49.086 04:03:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:12:49.086 04:03:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:49.086 04:03:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:49.086 04:03:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:12:49.086 04:03:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:49.086 04:03:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:49.086 04:03:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:49.086 04:03:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:12:49.086 04:03:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:49.086 04:03:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:49.086 [ 00:12:49.086 { 00:12:49.086 "name": "BaseBdev2", 00:12:49.086 "aliases": [ 00:12:49.086 
"4be8c3a6-5054-4f0f-9e86-b0ace3d588b8" 00:12:49.086 ], 00:12:49.086 "product_name": "Malloc disk", 00:12:49.086 "block_size": 512, 00:12:49.086 "num_blocks": 65536, 00:12:49.086 "uuid": "4be8c3a6-5054-4f0f-9e86-b0ace3d588b8", 00:12:49.086 "assigned_rate_limits": { 00:12:49.086 "rw_ios_per_sec": 0, 00:12:49.086 "rw_mbytes_per_sec": 0, 00:12:49.086 "r_mbytes_per_sec": 0, 00:12:49.086 "w_mbytes_per_sec": 0 00:12:49.086 }, 00:12:49.086 "claimed": false, 00:12:49.086 "zoned": false, 00:12:49.086 "supported_io_types": { 00:12:49.086 "read": true, 00:12:49.086 "write": true, 00:12:49.086 "unmap": true, 00:12:49.086 "flush": true, 00:12:49.086 "reset": true, 00:12:49.086 "nvme_admin": false, 00:12:49.086 "nvme_io": false, 00:12:49.086 "nvme_io_md": false, 00:12:49.086 "write_zeroes": true, 00:12:49.086 "zcopy": true, 00:12:49.086 "get_zone_info": false, 00:12:49.086 "zone_management": false, 00:12:49.086 "zone_append": false, 00:12:49.086 "compare": false, 00:12:49.086 "compare_and_write": false, 00:12:49.086 "abort": true, 00:12:49.086 "seek_hole": false, 00:12:49.086 "seek_data": false, 00:12:49.086 "copy": true, 00:12:49.086 "nvme_iov_md": false 00:12:49.086 }, 00:12:49.086 "memory_domains": [ 00:12:49.086 { 00:12:49.086 "dma_device_id": "system", 00:12:49.086 "dma_device_type": 1 00:12:49.086 }, 00:12:49.086 { 00:12:49.086 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:49.086 "dma_device_type": 2 00:12:49.086 } 00:12:49.086 ], 00:12:49.086 "driver_specific": {} 00:12:49.086 } 00:12:49.086 ] 00:12:49.086 04:03:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:49.086 04:03:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:12:49.086 04:03:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:12:49.086 04:03:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:12:49.086 04:03:42 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:12:49.086 04:03:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:49.086 04:03:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:49.086 BaseBdev3 00:12:49.086 04:03:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:49.086 04:03:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:12:49.086 04:03:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:12:49.086 04:03:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:49.086 04:03:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:12:49.086 04:03:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:49.086 04:03:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:49.086 04:03:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:12:49.086 04:03:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:49.086 04:03:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:49.086 04:03:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:49.086 04:03:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:12:49.086 04:03:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:49.086 04:03:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:49.086 [ 00:12:49.086 { 
00:12:49.086 "name": "BaseBdev3", 00:12:49.086 "aliases": [ 00:12:49.086 "7d485372-05ac-496e-aa85-cd8ad780b6ca" 00:12:49.086 ], 00:12:49.086 "product_name": "Malloc disk", 00:12:49.086 "block_size": 512, 00:12:49.086 "num_blocks": 65536, 00:12:49.086 "uuid": "7d485372-05ac-496e-aa85-cd8ad780b6ca", 00:12:49.086 "assigned_rate_limits": { 00:12:49.086 "rw_ios_per_sec": 0, 00:12:49.086 "rw_mbytes_per_sec": 0, 00:12:49.086 "r_mbytes_per_sec": 0, 00:12:49.086 "w_mbytes_per_sec": 0 00:12:49.086 }, 00:12:49.086 "claimed": false, 00:12:49.086 "zoned": false, 00:12:49.086 "supported_io_types": { 00:12:49.086 "read": true, 00:12:49.086 "write": true, 00:12:49.086 "unmap": true, 00:12:49.086 "flush": true, 00:12:49.086 "reset": true, 00:12:49.086 "nvme_admin": false, 00:12:49.086 "nvme_io": false, 00:12:49.086 "nvme_io_md": false, 00:12:49.086 "write_zeroes": true, 00:12:49.086 "zcopy": true, 00:12:49.086 "get_zone_info": false, 00:12:49.086 "zone_management": false, 00:12:49.086 "zone_append": false, 00:12:49.086 "compare": false, 00:12:49.086 "compare_and_write": false, 00:12:49.086 "abort": true, 00:12:49.086 "seek_hole": false, 00:12:49.086 "seek_data": false, 00:12:49.086 "copy": true, 00:12:49.086 "nvme_iov_md": false 00:12:49.086 }, 00:12:49.086 "memory_domains": [ 00:12:49.086 { 00:12:49.086 "dma_device_id": "system", 00:12:49.086 "dma_device_type": 1 00:12:49.086 }, 00:12:49.086 { 00:12:49.086 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:49.087 "dma_device_type": 2 00:12:49.087 } 00:12:49.087 ], 00:12:49.087 "driver_specific": {} 00:12:49.087 } 00:12:49.087 ] 00:12:49.087 04:03:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:49.087 04:03:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:12:49.087 04:03:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:12:49.087 04:03:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i 
< num_base_bdevs )) 00:12:49.087 04:03:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:12:49.087 04:03:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:49.087 04:03:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:49.087 BaseBdev4 00:12:49.087 04:03:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:49.087 04:03:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:12:49.087 04:03:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:12:49.087 04:03:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:49.087 04:03:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:12:49.347 04:03:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:49.347 04:03:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:49.347 04:03:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:12:49.347 04:03:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:49.347 04:03:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:49.347 04:03:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:49.347 04:03:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:12:49.347 04:03:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:49.347 04:03:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- 
# set +x 00:12:49.347 [ 00:12:49.347 { 00:12:49.347 "name": "BaseBdev4", 00:12:49.347 "aliases": [ 00:12:49.347 "de691469-c8ee-46f1-9ec5-38dbc7f7bb37" 00:12:49.347 ], 00:12:49.347 "product_name": "Malloc disk", 00:12:49.347 "block_size": 512, 00:12:49.347 "num_blocks": 65536, 00:12:49.347 "uuid": "de691469-c8ee-46f1-9ec5-38dbc7f7bb37", 00:12:49.347 "assigned_rate_limits": { 00:12:49.347 "rw_ios_per_sec": 0, 00:12:49.347 "rw_mbytes_per_sec": 0, 00:12:49.347 "r_mbytes_per_sec": 0, 00:12:49.347 "w_mbytes_per_sec": 0 00:12:49.347 }, 00:12:49.347 "claimed": false, 00:12:49.347 "zoned": false, 00:12:49.347 "supported_io_types": { 00:12:49.347 "read": true, 00:12:49.347 "write": true, 00:12:49.347 "unmap": true, 00:12:49.347 "flush": true, 00:12:49.347 "reset": true, 00:12:49.347 "nvme_admin": false, 00:12:49.347 "nvme_io": false, 00:12:49.347 "nvme_io_md": false, 00:12:49.347 "write_zeroes": true, 00:12:49.347 "zcopy": true, 00:12:49.347 "get_zone_info": false, 00:12:49.347 "zone_management": false, 00:12:49.347 "zone_append": false, 00:12:49.347 "compare": false, 00:12:49.347 "compare_and_write": false, 00:12:49.347 "abort": true, 00:12:49.347 "seek_hole": false, 00:12:49.347 "seek_data": false, 00:12:49.347 "copy": true, 00:12:49.347 "nvme_iov_md": false 00:12:49.347 }, 00:12:49.347 "memory_domains": [ 00:12:49.347 { 00:12:49.347 "dma_device_id": "system", 00:12:49.347 "dma_device_type": 1 00:12:49.347 }, 00:12:49.347 { 00:12:49.347 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:49.347 "dma_device_type": 2 00:12:49.347 } 00:12:49.347 ], 00:12:49.347 "driver_specific": {} 00:12:49.347 } 00:12:49.347 ] 00:12:49.347 04:03:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:49.347 04:03:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:12:49.347 04:03:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:12:49.347 04:03:42 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:12:49.347 04:03:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:12:49.347 04:03:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:49.347 04:03:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:49.347 [2024-12-06 04:03:42.483536] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:12:49.347 [2024-12-06 04:03:42.483583] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:12:49.347 [2024-12-06 04:03:42.483609] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:49.347 [2024-12-06 04:03:42.485476] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:49.347 [2024-12-06 04:03:42.485551] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:12:49.347 04:03:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:49.347 04:03:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:12:49.347 04:03:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:49.347 04:03:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:49.347 04:03:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:49.347 04:03:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:49.347 04:03:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=4 00:12:49.347 04:03:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:49.347 04:03:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:49.347 04:03:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:49.347 04:03:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:49.347 04:03:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:49.347 04:03:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:49.347 04:03:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:49.348 04:03:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:49.348 04:03:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:49.348 04:03:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:49.348 "name": "Existed_Raid", 00:12:49.348 "uuid": "c2b90b4d-bb54-4738-a67e-ac1fe0f2f8da", 00:12:49.348 "strip_size_kb": 64, 00:12:49.348 "state": "configuring", 00:12:49.348 "raid_level": "concat", 00:12:49.348 "superblock": true, 00:12:49.348 "num_base_bdevs": 4, 00:12:49.348 "num_base_bdevs_discovered": 3, 00:12:49.348 "num_base_bdevs_operational": 4, 00:12:49.348 "base_bdevs_list": [ 00:12:49.348 { 00:12:49.348 "name": "BaseBdev1", 00:12:49.348 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:49.348 "is_configured": false, 00:12:49.348 "data_offset": 0, 00:12:49.348 "data_size": 0 00:12:49.348 }, 00:12:49.348 { 00:12:49.348 "name": "BaseBdev2", 00:12:49.348 "uuid": "4be8c3a6-5054-4f0f-9e86-b0ace3d588b8", 00:12:49.348 "is_configured": true, 00:12:49.348 "data_offset": 2048, 00:12:49.348 "data_size": 63488 
00:12:49.348 }, 00:12:49.348 { 00:12:49.348 "name": "BaseBdev3", 00:12:49.348 "uuid": "7d485372-05ac-496e-aa85-cd8ad780b6ca", 00:12:49.348 "is_configured": true, 00:12:49.348 "data_offset": 2048, 00:12:49.348 "data_size": 63488 00:12:49.348 }, 00:12:49.348 { 00:12:49.348 "name": "BaseBdev4", 00:12:49.348 "uuid": "de691469-c8ee-46f1-9ec5-38dbc7f7bb37", 00:12:49.348 "is_configured": true, 00:12:49.348 "data_offset": 2048, 00:12:49.348 "data_size": 63488 00:12:49.348 } 00:12:49.348 ] 00:12:49.348 }' 00:12:49.348 04:03:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:49.348 04:03:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:49.607 04:03:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:12:49.607 04:03:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:49.607 04:03:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:49.607 [2024-12-06 04:03:42.938773] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:12:49.607 04:03:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:49.607 04:03:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:12:49.607 04:03:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:49.607 04:03:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:49.607 04:03:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:49.607 04:03:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:49.607 04:03:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=4 00:12:49.607 04:03:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:49.607 04:03:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:49.607 04:03:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:49.607 04:03:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:49.607 04:03:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:49.607 04:03:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:49.607 04:03:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:49.607 04:03:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:49.867 04:03:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:49.867 04:03:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:49.867 "name": "Existed_Raid", 00:12:49.867 "uuid": "c2b90b4d-bb54-4738-a67e-ac1fe0f2f8da", 00:12:49.867 "strip_size_kb": 64, 00:12:49.867 "state": "configuring", 00:12:49.867 "raid_level": "concat", 00:12:49.867 "superblock": true, 00:12:49.867 "num_base_bdevs": 4, 00:12:49.867 "num_base_bdevs_discovered": 2, 00:12:49.867 "num_base_bdevs_operational": 4, 00:12:49.867 "base_bdevs_list": [ 00:12:49.867 { 00:12:49.867 "name": "BaseBdev1", 00:12:49.867 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:49.867 "is_configured": false, 00:12:49.867 "data_offset": 0, 00:12:49.867 "data_size": 0 00:12:49.868 }, 00:12:49.868 { 00:12:49.868 "name": null, 00:12:49.868 "uuid": "4be8c3a6-5054-4f0f-9e86-b0ace3d588b8", 00:12:49.868 "is_configured": false, 00:12:49.868 "data_offset": 0, 00:12:49.868 "data_size": 63488 
00:12:49.868 }, 00:12:49.868 { 00:12:49.868 "name": "BaseBdev3", 00:12:49.868 "uuid": "7d485372-05ac-496e-aa85-cd8ad780b6ca", 00:12:49.868 "is_configured": true, 00:12:49.868 "data_offset": 2048, 00:12:49.868 "data_size": 63488 00:12:49.868 }, 00:12:49.868 { 00:12:49.868 "name": "BaseBdev4", 00:12:49.868 "uuid": "de691469-c8ee-46f1-9ec5-38dbc7f7bb37", 00:12:49.868 "is_configured": true, 00:12:49.868 "data_offset": 2048, 00:12:49.868 "data_size": 63488 00:12:49.868 } 00:12:49.868 ] 00:12:49.868 }' 00:12:49.868 04:03:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:49.868 04:03:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:50.128 04:03:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:50.128 04:03:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:50.128 04:03:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:12:50.128 04:03:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:50.128 04:03:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:50.128 04:03:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:12:50.128 04:03:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:12:50.128 04:03:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:50.128 04:03:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:50.128 [2024-12-06 04:03:43.446803] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:50.128 BaseBdev1 00:12:50.128 04:03:43 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:50.128 04:03:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:12:50.128 04:03:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:12:50.128 04:03:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:50.128 04:03:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:12:50.128 04:03:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:50.128 04:03:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:50.128 04:03:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:12:50.128 04:03:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:50.128 04:03:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:50.128 04:03:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:50.128 04:03:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:12:50.128 04:03:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:50.128 04:03:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:50.128 [ 00:12:50.128 { 00:12:50.128 "name": "BaseBdev1", 00:12:50.128 "aliases": [ 00:12:50.128 "d51222ae-856f-4e8c-b141-24d3c84ec595" 00:12:50.128 ], 00:12:50.128 "product_name": "Malloc disk", 00:12:50.128 "block_size": 512, 00:12:50.128 "num_blocks": 65536, 00:12:50.128 "uuid": "d51222ae-856f-4e8c-b141-24d3c84ec595", 00:12:50.128 "assigned_rate_limits": { 00:12:50.128 "rw_ios_per_sec": 0, 00:12:50.128 "rw_mbytes_per_sec": 0, 
00:12:50.128 "r_mbytes_per_sec": 0, 00:12:50.128 "w_mbytes_per_sec": 0 00:12:50.128 }, 00:12:50.128 "claimed": true, 00:12:50.128 "claim_type": "exclusive_write", 00:12:50.128 "zoned": false, 00:12:50.128 "supported_io_types": { 00:12:50.128 "read": true, 00:12:50.128 "write": true, 00:12:50.128 "unmap": true, 00:12:50.128 "flush": true, 00:12:50.128 "reset": true, 00:12:50.128 "nvme_admin": false, 00:12:50.128 "nvme_io": false, 00:12:50.128 "nvme_io_md": false, 00:12:50.128 "write_zeroes": true, 00:12:50.128 "zcopy": true, 00:12:50.128 "get_zone_info": false, 00:12:50.128 "zone_management": false, 00:12:50.128 "zone_append": false, 00:12:50.128 "compare": false, 00:12:50.128 "compare_and_write": false, 00:12:50.128 "abort": true, 00:12:50.128 "seek_hole": false, 00:12:50.128 "seek_data": false, 00:12:50.128 "copy": true, 00:12:50.128 "nvme_iov_md": false 00:12:50.128 }, 00:12:50.128 "memory_domains": [ 00:12:50.128 { 00:12:50.128 "dma_device_id": "system", 00:12:50.128 "dma_device_type": 1 00:12:50.128 }, 00:12:50.128 { 00:12:50.128 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:50.128 "dma_device_type": 2 00:12:50.128 } 00:12:50.128 ], 00:12:50.128 "driver_specific": {} 00:12:50.128 } 00:12:50.128 ] 00:12:50.388 04:03:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:50.388 04:03:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:12:50.388 04:03:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:12:50.388 04:03:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:50.388 04:03:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:50.388 04:03:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:50.388 04:03:43 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:50.388 04:03:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:50.388 04:03:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:50.388 04:03:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:50.388 04:03:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:50.388 04:03:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:50.388 04:03:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:50.388 04:03:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:50.388 04:03:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:50.388 04:03:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:50.388 04:03:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:50.388 04:03:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:50.388 "name": "Existed_Raid", 00:12:50.388 "uuid": "c2b90b4d-bb54-4738-a67e-ac1fe0f2f8da", 00:12:50.388 "strip_size_kb": 64, 00:12:50.388 "state": "configuring", 00:12:50.388 "raid_level": "concat", 00:12:50.388 "superblock": true, 00:12:50.388 "num_base_bdevs": 4, 00:12:50.388 "num_base_bdevs_discovered": 3, 00:12:50.388 "num_base_bdevs_operational": 4, 00:12:50.388 "base_bdevs_list": [ 00:12:50.388 { 00:12:50.388 "name": "BaseBdev1", 00:12:50.388 "uuid": "d51222ae-856f-4e8c-b141-24d3c84ec595", 00:12:50.388 "is_configured": true, 00:12:50.388 "data_offset": 2048, 00:12:50.388 "data_size": 63488 00:12:50.388 }, 00:12:50.388 { 
00:12:50.388 "name": null, 00:12:50.388 "uuid": "4be8c3a6-5054-4f0f-9e86-b0ace3d588b8", 00:12:50.388 "is_configured": false, 00:12:50.388 "data_offset": 0, 00:12:50.388 "data_size": 63488 00:12:50.388 }, 00:12:50.388 { 00:12:50.388 "name": "BaseBdev3", 00:12:50.388 "uuid": "7d485372-05ac-496e-aa85-cd8ad780b6ca", 00:12:50.388 "is_configured": true, 00:12:50.388 "data_offset": 2048, 00:12:50.388 "data_size": 63488 00:12:50.388 }, 00:12:50.388 { 00:12:50.388 "name": "BaseBdev4", 00:12:50.388 "uuid": "de691469-c8ee-46f1-9ec5-38dbc7f7bb37", 00:12:50.388 "is_configured": true, 00:12:50.388 "data_offset": 2048, 00:12:50.388 "data_size": 63488 00:12:50.388 } 00:12:50.388 ] 00:12:50.388 }' 00:12:50.388 04:03:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:50.388 04:03:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:50.648 04:03:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:50.648 04:03:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:50.648 04:03:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:12:50.648 04:03:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:50.648 04:03:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:50.648 04:03:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:12:50.648 04:03:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:12:50.648 04:03:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:50.648 04:03:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:50.648 [2024-12-06 04:03:43.974005] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:12:50.648 04:03:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:50.648 04:03:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:12:50.648 04:03:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:50.648 04:03:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:50.648 04:03:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:50.648 04:03:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:50.648 04:03:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:50.648 04:03:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:50.648 04:03:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:50.648 04:03:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:50.648 04:03:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:50.648 04:03:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:50.648 04:03:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:50.648 04:03:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:50.648 04:03:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:50.648 04:03:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:50.907 04:03:44 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:50.907 "name": "Existed_Raid", 00:12:50.907 "uuid": "c2b90b4d-bb54-4738-a67e-ac1fe0f2f8da", 00:12:50.907 "strip_size_kb": 64, 00:12:50.907 "state": "configuring", 00:12:50.907 "raid_level": "concat", 00:12:50.907 "superblock": true, 00:12:50.907 "num_base_bdevs": 4, 00:12:50.907 "num_base_bdevs_discovered": 2, 00:12:50.907 "num_base_bdevs_operational": 4, 00:12:50.907 "base_bdevs_list": [ 00:12:50.907 { 00:12:50.907 "name": "BaseBdev1", 00:12:50.907 "uuid": "d51222ae-856f-4e8c-b141-24d3c84ec595", 00:12:50.907 "is_configured": true, 00:12:50.907 "data_offset": 2048, 00:12:50.907 "data_size": 63488 00:12:50.907 }, 00:12:50.907 { 00:12:50.907 "name": null, 00:12:50.907 "uuid": "4be8c3a6-5054-4f0f-9e86-b0ace3d588b8", 00:12:50.907 "is_configured": false, 00:12:50.907 "data_offset": 0, 00:12:50.907 "data_size": 63488 00:12:50.907 }, 00:12:50.907 { 00:12:50.907 "name": null, 00:12:50.907 "uuid": "7d485372-05ac-496e-aa85-cd8ad780b6ca", 00:12:50.907 "is_configured": false, 00:12:50.907 "data_offset": 0, 00:12:50.907 "data_size": 63488 00:12:50.907 }, 00:12:50.907 { 00:12:50.907 "name": "BaseBdev4", 00:12:50.907 "uuid": "de691469-c8ee-46f1-9ec5-38dbc7f7bb37", 00:12:50.907 "is_configured": true, 00:12:50.907 "data_offset": 2048, 00:12:50.907 "data_size": 63488 00:12:50.907 } 00:12:50.907 ] 00:12:50.907 }' 00:12:50.907 04:03:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:50.907 04:03:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:51.167 04:03:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:12:51.167 04:03:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:51.167 04:03:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:51.167 
04:03:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:51.167 04:03:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:51.167 04:03:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:12:51.167 04:03:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:12:51.167 04:03:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:51.167 04:03:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:51.167 [2024-12-06 04:03:44.449202] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:51.167 04:03:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:51.167 04:03:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:12:51.167 04:03:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:51.167 04:03:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:51.167 04:03:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:51.167 04:03:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:51.167 04:03:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:51.167 04:03:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:51.167 04:03:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:51.167 04:03:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:12:51.167 04:03:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:51.167 04:03:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:51.167 04:03:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:51.167 04:03:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:51.167 04:03:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:51.167 04:03:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:51.167 04:03:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:51.167 "name": "Existed_Raid", 00:12:51.167 "uuid": "c2b90b4d-bb54-4738-a67e-ac1fe0f2f8da", 00:12:51.167 "strip_size_kb": 64, 00:12:51.167 "state": "configuring", 00:12:51.167 "raid_level": "concat", 00:12:51.167 "superblock": true, 00:12:51.167 "num_base_bdevs": 4, 00:12:51.167 "num_base_bdevs_discovered": 3, 00:12:51.167 "num_base_bdevs_operational": 4, 00:12:51.167 "base_bdevs_list": [ 00:12:51.167 { 00:12:51.167 "name": "BaseBdev1", 00:12:51.167 "uuid": "d51222ae-856f-4e8c-b141-24d3c84ec595", 00:12:51.167 "is_configured": true, 00:12:51.167 "data_offset": 2048, 00:12:51.167 "data_size": 63488 00:12:51.167 }, 00:12:51.167 { 00:12:51.167 "name": null, 00:12:51.167 "uuid": "4be8c3a6-5054-4f0f-9e86-b0ace3d588b8", 00:12:51.167 "is_configured": false, 00:12:51.167 "data_offset": 0, 00:12:51.167 "data_size": 63488 00:12:51.167 }, 00:12:51.167 { 00:12:51.167 "name": "BaseBdev3", 00:12:51.167 "uuid": "7d485372-05ac-496e-aa85-cd8ad780b6ca", 00:12:51.167 "is_configured": true, 00:12:51.167 "data_offset": 2048, 00:12:51.167 "data_size": 63488 00:12:51.167 }, 00:12:51.167 { 00:12:51.167 "name": "BaseBdev4", 00:12:51.168 "uuid": 
"de691469-c8ee-46f1-9ec5-38dbc7f7bb37", 00:12:51.168 "is_configured": true, 00:12:51.168 "data_offset": 2048, 00:12:51.168 "data_size": 63488 00:12:51.168 } 00:12:51.168 ] 00:12:51.168 }' 00:12:51.168 04:03:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:51.168 04:03:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:51.736 04:03:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:51.736 04:03:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:51.736 04:03:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:51.736 04:03:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:12:51.736 04:03:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:51.736 04:03:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:12:51.736 04:03:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:12:51.736 04:03:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:51.736 04:03:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:51.736 [2024-12-06 04:03:44.928406] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:12:51.736 04:03:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:51.736 04:03:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:12:51.736 04:03:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:51.736 04:03:45 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:51.736 04:03:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:51.736 04:03:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:51.737 04:03:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:51.737 04:03:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:51.737 04:03:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:51.737 04:03:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:51.737 04:03:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:51.737 04:03:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:51.737 04:03:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:51.737 04:03:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:51.737 04:03:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:51.737 04:03:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:51.737 04:03:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:51.737 "name": "Existed_Raid", 00:12:51.737 "uuid": "c2b90b4d-bb54-4738-a67e-ac1fe0f2f8da", 00:12:51.737 "strip_size_kb": 64, 00:12:51.737 "state": "configuring", 00:12:51.737 "raid_level": "concat", 00:12:51.737 "superblock": true, 00:12:51.737 "num_base_bdevs": 4, 00:12:51.737 "num_base_bdevs_discovered": 2, 00:12:51.737 "num_base_bdevs_operational": 4, 00:12:51.737 "base_bdevs_list": [ 00:12:51.737 { 00:12:51.737 "name": null, 00:12:51.737 
"uuid": "d51222ae-856f-4e8c-b141-24d3c84ec595", 00:12:51.737 "is_configured": false, 00:12:51.737 "data_offset": 0, 00:12:51.737 "data_size": 63488 00:12:51.737 }, 00:12:51.737 { 00:12:51.737 "name": null, 00:12:51.737 "uuid": "4be8c3a6-5054-4f0f-9e86-b0ace3d588b8", 00:12:51.737 "is_configured": false, 00:12:51.737 "data_offset": 0, 00:12:51.737 "data_size": 63488 00:12:51.737 }, 00:12:51.737 { 00:12:51.737 "name": "BaseBdev3", 00:12:51.737 "uuid": "7d485372-05ac-496e-aa85-cd8ad780b6ca", 00:12:51.737 "is_configured": true, 00:12:51.737 "data_offset": 2048, 00:12:51.737 "data_size": 63488 00:12:51.737 }, 00:12:51.737 { 00:12:51.737 "name": "BaseBdev4", 00:12:51.737 "uuid": "de691469-c8ee-46f1-9ec5-38dbc7f7bb37", 00:12:51.737 "is_configured": true, 00:12:51.737 "data_offset": 2048, 00:12:51.737 "data_size": 63488 00:12:51.737 } 00:12:51.737 ] 00:12:51.737 }' 00:12:51.737 04:03:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:51.737 04:03:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:52.308 04:03:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:52.308 04:03:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:52.308 04:03:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:52.308 04:03:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:12:52.308 04:03:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:52.308 04:03:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:12:52.308 04:03:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:12:52.308 04:03:45 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:12:52.308 04:03:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:52.308 [2024-12-06 04:03:45.526021] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:52.308 04:03:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:52.308 04:03:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:12:52.308 04:03:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:52.308 04:03:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:52.308 04:03:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:52.308 04:03:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:52.308 04:03:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:52.308 04:03:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:52.308 04:03:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:52.308 04:03:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:52.308 04:03:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:52.308 04:03:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:52.308 04:03:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:52.308 04:03:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:52.308 04:03:45 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:52.308 04:03:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:52.308 04:03:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:52.308 "name": "Existed_Raid", 00:12:52.308 "uuid": "c2b90b4d-bb54-4738-a67e-ac1fe0f2f8da", 00:12:52.308 "strip_size_kb": 64, 00:12:52.308 "state": "configuring", 00:12:52.308 "raid_level": "concat", 00:12:52.308 "superblock": true, 00:12:52.308 "num_base_bdevs": 4, 00:12:52.308 "num_base_bdevs_discovered": 3, 00:12:52.308 "num_base_bdevs_operational": 4, 00:12:52.308 "base_bdevs_list": [ 00:12:52.308 { 00:12:52.308 "name": null, 00:12:52.308 "uuid": "d51222ae-856f-4e8c-b141-24d3c84ec595", 00:12:52.308 "is_configured": false, 00:12:52.308 "data_offset": 0, 00:12:52.308 "data_size": 63488 00:12:52.308 }, 00:12:52.308 { 00:12:52.308 "name": "BaseBdev2", 00:12:52.308 "uuid": "4be8c3a6-5054-4f0f-9e86-b0ace3d588b8", 00:12:52.308 "is_configured": true, 00:12:52.308 "data_offset": 2048, 00:12:52.308 "data_size": 63488 00:12:52.308 }, 00:12:52.308 { 00:12:52.308 "name": "BaseBdev3", 00:12:52.308 "uuid": "7d485372-05ac-496e-aa85-cd8ad780b6ca", 00:12:52.308 "is_configured": true, 00:12:52.308 "data_offset": 2048, 00:12:52.308 "data_size": 63488 00:12:52.308 }, 00:12:52.308 { 00:12:52.308 "name": "BaseBdev4", 00:12:52.308 "uuid": "de691469-c8ee-46f1-9ec5-38dbc7f7bb37", 00:12:52.308 "is_configured": true, 00:12:52.308 "data_offset": 2048, 00:12:52.308 "data_size": 63488 00:12:52.308 } 00:12:52.308 ] 00:12:52.308 }' 00:12:52.308 04:03:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:52.308 04:03:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:52.878 04:03:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:12:52.878 04:03:45 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:52.878 04:03:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:52.878 04:03:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:52.878 04:03:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:52.878 04:03:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:12:52.878 04:03:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:52.878 04:03:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:52.878 04:03:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:52.878 04:03:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:12:52.878 04:03:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:52.878 04:03:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u d51222ae-856f-4e8c-b141-24d3c84ec595 00:12:52.878 04:03:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:52.878 04:03:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:52.878 [2024-12-06 04:03:46.072051] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:12:52.878 [2024-12-06 04:03:46.072332] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:12:52.878 [2024-12-06 04:03:46.072365] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:12:52.878 [2024-12-06 04:03:46.072669] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: 
raid_bdev_create_cb, 0x60d0000063c0 00:12:52.878 [2024-12-06 04:03:46.072845] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:12:52.878 [2024-12-06 04:03:46.072866] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:12:52.878 NewBaseBdev 00:12:52.878 [2024-12-06 04:03:46.073017] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:52.878 04:03:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:52.878 04:03:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:12:52.878 04:03:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:12:52.878 04:03:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:52.878 04:03:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:12:52.878 04:03:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:52.878 04:03:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:52.878 04:03:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:12:52.878 04:03:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:52.878 04:03:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:52.878 04:03:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:52.878 04:03:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:12:52.878 04:03:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:52.878 04:03:46 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:52.878 [ 00:12:52.878 { 00:12:52.878 "name": "NewBaseBdev", 00:12:52.878 "aliases": [ 00:12:52.878 "d51222ae-856f-4e8c-b141-24d3c84ec595" 00:12:52.878 ], 00:12:52.878 "product_name": "Malloc disk", 00:12:52.878 "block_size": 512, 00:12:52.878 "num_blocks": 65536, 00:12:52.878 "uuid": "d51222ae-856f-4e8c-b141-24d3c84ec595", 00:12:52.878 "assigned_rate_limits": { 00:12:52.878 "rw_ios_per_sec": 0, 00:12:52.878 "rw_mbytes_per_sec": 0, 00:12:52.878 "r_mbytes_per_sec": 0, 00:12:52.878 "w_mbytes_per_sec": 0 00:12:52.878 }, 00:12:52.878 "claimed": true, 00:12:52.878 "claim_type": "exclusive_write", 00:12:52.878 "zoned": false, 00:12:52.878 "supported_io_types": { 00:12:52.878 "read": true, 00:12:52.878 "write": true, 00:12:52.878 "unmap": true, 00:12:52.878 "flush": true, 00:12:52.878 "reset": true, 00:12:52.878 "nvme_admin": false, 00:12:52.878 "nvme_io": false, 00:12:52.878 "nvme_io_md": false, 00:12:52.878 "write_zeroes": true, 00:12:52.878 "zcopy": true, 00:12:52.878 "get_zone_info": false, 00:12:52.878 "zone_management": false, 00:12:52.878 "zone_append": false, 00:12:52.878 "compare": false, 00:12:52.878 "compare_and_write": false, 00:12:52.878 "abort": true, 00:12:52.878 "seek_hole": false, 00:12:52.878 "seek_data": false, 00:12:52.878 "copy": true, 00:12:52.878 "nvme_iov_md": false 00:12:52.878 }, 00:12:52.878 "memory_domains": [ 00:12:52.878 { 00:12:52.878 "dma_device_id": "system", 00:12:52.879 "dma_device_type": 1 00:12:52.879 }, 00:12:52.879 { 00:12:52.879 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:52.879 "dma_device_type": 2 00:12:52.879 } 00:12:52.879 ], 00:12:52.879 "driver_specific": {} 00:12:52.879 } 00:12:52.879 ] 00:12:52.879 04:03:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:52.879 04:03:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:12:52.879 04:03:46 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online concat 64 4 00:12:52.879 04:03:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:52.879 04:03:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:52.879 04:03:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:52.879 04:03:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:52.879 04:03:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:52.879 04:03:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:52.879 04:03:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:52.879 04:03:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:52.879 04:03:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:52.879 04:03:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:52.879 04:03:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:52.879 04:03:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:52.879 04:03:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:52.879 04:03:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:52.879 04:03:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:52.879 "name": "Existed_Raid", 00:12:52.879 "uuid": "c2b90b4d-bb54-4738-a67e-ac1fe0f2f8da", 00:12:52.879 "strip_size_kb": 64, 00:12:52.879 
"state": "online", 00:12:52.879 "raid_level": "concat", 00:12:52.879 "superblock": true, 00:12:52.879 "num_base_bdevs": 4, 00:12:52.879 "num_base_bdevs_discovered": 4, 00:12:52.879 "num_base_bdevs_operational": 4, 00:12:52.879 "base_bdevs_list": [ 00:12:52.879 { 00:12:52.879 "name": "NewBaseBdev", 00:12:52.879 "uuid": "d51222ae-856f-4e8c-b141-24d3c84ec595", 00:12:52.879 "is_configured": true, 00:12:52.879 "data_offset": 2048, 00:12:52.879 "data_size": 63488 00:12:52.879 }, 00:12:52.879 { 00:12:52.879 "name": "BaseBdev2", 00:12:52.879 "uuid": "4be8c3a6-5054-4f0f-9e86-b0ace3d588b8", 00:12:52.879 "is_configured": true, 00:12:52.879 "data_offset": 2048, 00:12:52.879 "data_size": 63488 00:12:52.879 }, 00:12:52.879 { 00:12:52.879 "name": "BaseBdev3", 00:12:52.879 "uuid": "7d485372-05ac-496e-aa85-cd8ad780b6ca", 00:12:52.879 "is_configured": true, 00:12:52.879 "data_offset": 2048, 00:12:52.879 "data_size": 63488 00:12:52.879 }, 00:12:52.879 { 00:12:52.879 "name": "BaseBdev4", 00:12:52.879 "uuid": "de691469-c8ee-46f1-9ec5-38dbc7f7bb37", 00:12:52.879 "is_configured": true, 00:12:52.879 "data_offset": 2048, 00:12:52.879 "data_size": 63488 00:12:52.879 } 00:12:52.879 ] 00:12:52.879 }' 00:12:52.879 04:03:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:52.879 04:03:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:53.447 04:03:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:12:53.447 04:03:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:12:53.447 04:03:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:12:53.447 04:03:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:12:53.447 04:03:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:12:53.447 
04:03:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:12:53.447 04:03:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:12:53.447 04:03:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:53.447 04:03:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:53.447 04:03:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:12:53.447 [2024-12-06 04:03:46.515707] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:53.447 04:03:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:53.447 04:03:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:12:53.447 "name": "Existed_Raid", 00:12:53.447 "aliases": [ 00:12:53.447 "c2b90b4d-bb54-4738-a67e-ac1fe0f2f8da" 00:12:53.447 ], 00:12:53.447 "product_name": "Raid Volume", 00:12:53.447 "block_size": 512, 00:12:53.447 "num_blocks": 253952, 00:12:53.447 "uuid": "c2b90b4d-bb54-4738-a67e-ac1fe0f2f8da", 00:12:53.447 "assigned_rate_limits": { 00:12:53.447 "rw_ios_per_sec": 0, 00:12:53.447 "rw_mbytes_per_sec": 0, 00:12:53.447 "r_mbytes_per_sec": 0, 00:12:53.447 "w_mbytes_per_sec": 0 00:12:53.447 }, 00:12:53.447 "claimed": false, 00:12:53.447 "zoned": false, 00:12:53.447 "supported_io_types": { 00:12:53.447 "read": true, 00:12:53.447 "write": true, 00:12:53.447 "unmap": true, 00:12:53.447 "flush": true, 00:12:53.448 "reset": true, 00:12:53.448 "nvme_admin": false, 00:12:53.448 "nvme_io": false, 00:12:53.448 "nvme_io_md": false, 00:12:53.448 "write_zeroes": true, 00:12:53.448 "zcopy": false, 00:12:53.448 "get_zone_info": false, 00:12:53.448 "zone_management": false, 00:12:53.448 "zone_append": false, 00:12:53.448 "compare": false, 00:12:53.448 "compare_and_write": false, 00:12:53.448 "abort": 
false, 00:12:53.448 "seek_hole": false, 00:12:53.448 "seek_data": false, 00:12:53.448 "copy": false, 00:12:53.448 "nvme_iov_md": false 00:12:53.448 }, 00:12:53.448 "memory_domains": [ 00:12:53.448 { 00:12:53.448 "dma_device_id": "system", 00:12:53.448 "dma_device_type": 1 00:12:53.448 }, 00:12:53.448 { 00:12:53.448 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:53.448 "dma_device_type": 2 00:12:53.448 }, 00:12:53.448 { 00:12:53.448 "dma_device_id": "system", 00:12:53.448 "dma_device_type": 1 00:12:53.448 }, 00:12:53.448 { 00:12:53.448 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:53.448 "dma_device_type": 2 00:12:53.448 }, 00:12:53.448 { 00:12:53.448 "dma_device_id": "system", 00:12:53.448 "dma_device_type": 1 00:12:53.448 }, 00:12:53.448 { 00:12:53.448 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:53.448 "dma_device_type": 2 00:12:53.448 }, 00:12:53.448 { 00:12:53.448 "dma_device_id": "system", 00:12:53.448 "dma_device_type": 1 00:12:53.448 }, 00:12:53.448 { 00:12:53.448 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:53.448 "dma_device_type": 2 00:12:53.448 } 00:12:53.448 ], 00:12:53.448 "driver_specific": { 00:12:53.448 "raid": { 00:12:53.448 "uuid": "c2b90b4d-bb54-4738-a67e-ac1fe0f2f8da", 00:12:53.448 "strip_size_kb": 64, 00:12:53.448 "state": "online", 00:12:53.448 "raid_level": "concat", 00:12:53.448 "superblock": true, 00:12:53.448 "num_base_bdevs": 4, 00:12:53.448 "num_base_bdevs_discovered": 4, 00:12:53.448 "num_base_bdevs_operational": 4, 00:12:53.448 "base_bdevs_list": [ 00:12:53.448 { 00:12:53.448 "name": "NewBaseBdev", 00:12:53.448 "uuid": "d51222ae-856f-4e8c-b141-24d3c84ec595", 00:12:53.448 "is_configured": true, 00:12:53.448 "data_offset": 2048, 00:12:53.448 "data_size": 63488 00:12:53.448 }, 00:12:53.448 { 00:12:53.448 "name": "BaseBdev2", 00:12:53.448 "uuid": "4be8c3a6-5054-4f0f-9e86-b0ace3d588b8", 00:12:53.448 "is_configured": true, 00:12:53.448 "data_offset": 2048, 00:12:53.448 "data_size": 63488 00:12:53.448 }, 00:12:53.448 { 00:12:53.448 
"name": "BaseBdev3", 00:12:53.448 "uuid": "7d485372-05ac-496e-aa85-cd8ad780b6ca", 00:12:53.448 "is_configured": true, 00:12:53.448 "data_offset": 2048, 00:12:53.448 "data_size": 63488 00:12:53.448 }, 00:12:53.448 { 00:12:53.448 "name": "BaseBdev4", 00:12:53.448 "uuid": "de691469-c8ee-46f1-9ec5-38dbc7f7bb37", 00:12:53.448 "is_configured": true, 00:12:53.448 "data_offset": 2048, 00:12:53.448 "data_size": 63488 00:12:53.448 } 00:12:53.448 ] 00:12:53.448 } 00:12:53.448 } 00:12:53.448 }' 00:12:53.448 04:03:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:12:53.448 04:03:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:12:53.448 BaseBdev2 00:12:53.448 BaseBdev3 00:12:53.448 BaseBdev4' 00:12:53.448 04:03:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:53.448 04:03:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:12:53.448 04:03:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:53.448 04:03:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:53.448 04:03:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:12:53.448 04:03:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:53.448 04:03:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:53.448 04:03:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:53.448 04:03:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:53.448 04:03:46 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:53.448 04:03:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:53.448 04:03:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:12:53.448 04:03:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:53.448 04:03:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:53.448 04:03:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:53.448 04:03:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:53.448 04:03:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:53.448 04:03:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:53.448 04:03:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:53.448 04:03:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:12:53.448 04:03:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:53.448 04:03:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:53.448 04:03:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:53.448 04:03:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:53.448 04:03:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:53.448 04:03:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # 
[[ 512 == \5\1\2\ \ \ ]] 00:12:53.448 04:03:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:53.448 04:03:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:12:53.448 04:03:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:53.448 04:03:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:53.448 04:03:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:53.448 04:03:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:53.708 04:03:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:53.708 04:03:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:53.708 04:03:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:12:53.708 04:03:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:53.708 04:03:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:53.708 [2024-12-06 04:03:46.826852] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:12:53.708 [2024-12-06 04:03:46.826893] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:53.708 [2024-12-06 04:03:46.826981] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:53.708 [2024-12-06 04:03:46.827061] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:53.708 [2024-12-06 04:03:46.827073] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, 
state offline 00:12:53.708 04:03:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:53.708 04:03:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 72057 00:12:53.708 04:03:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 72057 ']' 00:12:53.708 04:03:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 72057 00:12:53.708 04:03:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:12:53.708 04:03:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:53.708 04:03:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72057 00:12:53.708 04:03:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:53.708 04:03:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:53.708 killing process with pid 72057 00:12:53.708 04:03:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72057' 00:12:53.708 04:03:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 72057 00:12:53.708 [2024-12-06 04:03:46.873863] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:12:53.708 04:03:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 72057 00:12:53.967 [2024-12-06 04:03:47.290244] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:12:55.348 04:03:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:12:55.348 00:12:55.348 real 0m11.833s 00:12:55.348 user 0m18.847s 00:12:55.348 sys 0m2.090s 00:12:55.348 04:03:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:55.348 04:03:48 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:55.348 ************************************ 00:12:55.348 END TEST raid_state_function_test_sb 00:12:55.348 ************************************ 00:12:55.348 04:03:48 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test concat 4 00:12:55.348 04:03:48 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:12:55.348 04:03:48 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:55.348 04:03:48 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:12:55.348 ************************************ 00:12:55.348 START TEST raid_superblock_test 00:12:55.348 ************************************ 00:12:55.348 04:03:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test concat 4 00:12:55.348 04:03:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=concat 00:12:55.348 04:03:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=4 00:12:55.348 04:03:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:12:55.348 04:03:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:12:55.348 04:03:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:12:55.348 04:03:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:12:55.348 04:03:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:12:55.348 04:03:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:12:55.348 04:03:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:12:55.348 04:03:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:12:55.348 04:03:48 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:12:55.348 04:03:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:12:55.348 04:03:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:12:55.348 04:03:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' concat '!=' raid1 ']' 00:12:55.348 04:03:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:12:55.348 04:03:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:12:55.348 04:03:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=72733 00:12:55.348 04:03:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:12:55.349 04:03:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 72733 00:12:55.349 04:03:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 72733 ']' 00:12:55.349 04:03:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:55.349 04:03:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:55.349 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:55.349 04:03:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:55.349 04:03:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:55.349 04:03:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:55.349 [2024-12-06 04:03:48.594946] Starting SPDK v25.01-pre git sha1 a4d2a837b / DPDK 24.03.0 initialization... 
00:12:55.349 [2024-12-06 04:03:48.595086] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72733 ] 00:12:55.607 [2024-12-06 04:03:48.770967] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:55.607 [2024-12-06 04:03:48.886762] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:55.871 [2024-12-06 04:03:49.093039] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:55.871 [2024-12-06 04:03:49.093118] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:56.139 04:03:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:56.139 04:03:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:12:56.139 04:03:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:12:56.139 04:03:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:12:56.139 04:03:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:12:56.139 04:03:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:12:56.139 04:03:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:12:56.139 04:03:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:12:56.139 04:03:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:12:56.139 04:03:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:12:56.139 04:03:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:12:56.139 
04:03:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:56.139 04:03:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:56.406 malloc1 00:12:56.406 04:03:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:56.406 04:03:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:12:56.406 04:03:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:56.406 04:03:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:56.406 [2024-12-06 04:03:49.510759] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:12:56.406 [2024-12-06 04:03:49.510834] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:56.406 [2024-12-06 04:03:49.510860] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:12:56.406 [2024-12-06 04:03:49.510871] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:56.406 [2024-12-06 04:03:49.513269] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:56.407 [2024-12-06 04:03:49.513313] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:12:56.407 pt1 00:12:56.407 04:03:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:56.407 04:03:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:12:56.407 04:03:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:12:56.407 04:03:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:12:56.407 04:03:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:12:56.407 04:03:49 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:12:56.407 04:03:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:12:56.407 04:03:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:12:56.407 04:03:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:12:56.407 04:03:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:12:56.407 04:03:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:56.407 04:03:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:56.407 malloc2 00:12:56.407 04:03:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:56.407 04:03:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:12:56.407 04:03:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:56.407 04:03:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:56.407 [2024-12-06 04:03:49.565807] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:12:56.407 [2024-12-06 04:03:49.565875] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:56.407 [2024-12-06 04:03:49.565901] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:12:56.407 [2024-12-06 04:03:49.565911] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:56.407 [2024-12-06 04:03:49.568235] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:56.407 [2024-12-06 04:03:49.568278] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:12:56.407 
pt2 00:12:56.407 04:03:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:56.407 04:03:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:12:56.407 04:03:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:12:56.407 04:03:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:12:56.407 04:03:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:12:56.407 04:03:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:12:56.407 04:03:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:12:56.407 04:03:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:12:56.407 04:03:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:12:56.407 04:03:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:12:56.407 04:03:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:56.407 04:03:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:56.407 malloc3 00:12:56.407 04:03:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:56.407 04:03:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:12:56.407 04:03:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:56.407 04:03:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:56.407 [2024-12-06 04:03:49.635937] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:12:56.407 [2024-12-06 04:03:49.635997] 
vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:56.407 [2024-12-06 04:03:49.636020] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:12:56.407 [2024-12-06 04:03:49.636028] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:56.407 [2024-12-06 04:03:49.638264] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:56.407 [2024-12-06 04:03:49.638303] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:12:56.407 pt3 00:12:56.407 04:03:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:56.407 04:03:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:12:56.407 04:03:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:12:56.407 04:03:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc4 00:12:56.407 04:03:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt4 00:12:56.407 04:03:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004 00:12:56.407 04:03:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:12:56.407 04:03:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:12:56.407 04:03:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:12:56.407 04:03:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc4 00:12:56.407 04:03:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:56.407 04:03:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:56.407 malloc4 00:12:56.407 04:03:49 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:56.407 04:03:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:12:56.407 04:03:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:56.407 04:03:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:56.407 [2024-12-06 04:03:49.695796] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:12:56.407 [2024-12-06 04:03:49.695860] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:56.407 [2024-12-06 04:03:49.695882] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:12:56.407 [2024-12-06 04:03:49.695890] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:56.407 [2024-12-06 04:03:49.698146] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:56.407 [2024-12-06 04:03:49.698183] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:12:56.407 pt4 00:12:56.407 04:03:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:56.407 04:03:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:12:56.407 04:03:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:12:56.407 04:03:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''pt1 pt2 pt3 pt4'\''' -n raid_bdev1 -s 00:12:56.407 04:03:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:56.407 04:03:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:56.407 [2024-12-06 04:03:49.707815] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:12:56.407 [2024-12-06 
04:03:49.709764] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:12:56.407 [2024-12-06 04:03:49.709866] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:12:56.407 [2024-12-06 04:03:49.709929] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:12:56.407 [2024-12-06 04:03:49.710132] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:12:56.407 [2024-12-06 04:03:49.710152] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:12:56.407 [2024-12-06 04:03:49.710424] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:12:56.407 [2024-12-06 04:03:49.710596] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:12:56.408 [2024-12-06 04:03:49.710617] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:12:56.408 [2024-12-06 04:03:49.710785] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:56.408 04:03:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:56.408 04:03:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:12:56.408 04:03:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:56.408 04:03:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:56.408 04:03:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:56.408 04:03:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:56.408 04:03:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:56.408 04:03:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:12:56.408 04:03:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:56.408 04:03:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:56.408 04:03:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:56.408 04:03:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:56.408 04:03:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:56.408 04:03:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:56.408 04:03:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:56.408 04:03:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:56.667 04:03:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:56.667 "name": "raid_bdev1", 00:12:56.667 "uuid": "321ec132-a046-4189-949a-286d6f63783d", 00:12:56.667 "strip_size_kb": 64, 00:12:56.667 "state": "online", 00:12:56.667 "raid_level": "concat", 00:12:56.667 "superblock": true, 00:12:56.667 "num_base_bdevs": 4, 00:12:56.667 "num_base_bdevs_discovered": 4, 00:12:56.667 "num_base_bdevs_operational": 4, 00:12:56.667 "base_bdevs_list": [ 00:12:56.667 { 00:12:56.667 "name": "pt1", 00:12:56.667 "uuid": "00000000-0000-0000-0000-000000000001", 00:12:56.667 "is_configured": true, 00:12:56.667 "data_offset": 2048, 00:12:56.667 "data_size": 63488 00:12:56.667 }, 00:12:56.667 { 00:12:56.667 "name": "pt2", 00:12:56.667 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:56.667 "is_configured": true, 00:12:56.667 "data_offset": 2048, 00:12:56.667 "data_size": 63488 00:12:56.667 }, 00:12:56.667 { 00:12:56.667 "name": "pt3", 00:12:56.667 "uuid": "00000000-0000-0000-0000-000000000003", 00:12:56.667 "is_configured": true, 00:12:56.667 "data_offset": 2048, 00:12:56.667 
"data_size": 63488 00:12:56.667 }, 00:12:56.667 { 00:12:56.668 "name": "pt4", 00:12:56.668 "uuid": "00000000-0000-0000-0000-000000000004", 00:12:56.668 "is_configured": true, 00:12:56.668 "data_offset": 2048, 00:12:56.668 "data_size": 63488 00:12:56.668 } 00:12:56.668 ] 00:12:56.668 }' 00:12:56.668 04:03:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:56.668 04:03:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:56.927 04:03:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:12:56.927 04:03:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:12:56.927 04:03:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:12:56.927 04:03:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:12:56.927 04:03:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:12:56.927 04:03:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:12:56.927 04:03:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:12:56.927 04:03:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:12:56.927 04:03:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:56.927 04:03:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:56.927 [2024-12-06 04:03:50.155386] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:56.927 04:03:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:56.927 04:03:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:12:56.927 "name": "raid_bdev1", 00:12:56.927 "aliases": [ 00:12:56.927 "321ec132-a046-4189-949a-286d6f63783d" 
00:12:56.927 ], 00:12:56.927 "product_name": "Raid Volume", 00:12:56.927 "block_size": 512, 00:12:56.927 "num_blocks": 253952, 00:12:56.927 "uuid": "321ec132-a046-4189-949a-286d6f63783d", 00:12:56.927 "assigned_rate_limits": { 00:12:56.927 "rw_ios_per_sec": 0, 00:12:56.927 "rw_mbytes_per_sec": 0, 00:12:56.927 "r_mbytes_per_sec": 0, 00:12:56.927 "w_mbytes_per_sec": 0 00:12:56.927 }, 00:12:56.927 "claimed": false, 00:12:56.927 "zoned": false, 00:12:56.927 "supported_io_types": { 00:12:56.927 "read": true, 00:12:56.927 "write": true, 00:12:56.927 "unmap": true, 00:12:56.927 "flush": true, 00:12:56.927 "reset": true, 00:12:56.927 "nvme_admin": false, 00:12:56.927 "nvme_io": false, 00:12:56.927 "nvme_io_md": false, 00:12:56.927 "write_zeroes": true, 00:12:56.927 "zcopy": false, 00:12:56.927 "get_zone_info": false, 00:12:56.927 "zone_management": false, 00:12:56.927 "zone_append": false, 00:12:56.927 "compare": false, 00:12:56.927 "compare_and_write": false, 00:12:56.927 "abort": false, 00:12:56.927 "seek_hole": false, 00:12:56.927 "seek_data": false, 00:12:56.927 "copy": false, 00:12:56.927 "nvme_iov_md": false 00:12:56.927 }, 00:12:56.927 "memory_domains": [ 00:12:56.927 { 00:12:56.927 "dma_device_id": "system", 00:12:56.927 "dma_device_type": 1 00:12:56.927 }, 00:12:56.927 { 00:12:56.927 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:56.927 "dma_device_type": 2 00:12:56.927 }, 00:12:56.927 { 00:12:56.927 "dma_device_id": "system", 00:12:56.927 "dma_device_type": 1 00:12:56.927 }, 00:12:56.927 { 00:12:56.927 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:56.927 "dma_device_type": 2 00:12:56.927 }, 00:12:56.927 { 00:12:56.927 "dma_device_id": "system", 00:12:56.927 "dma_device_type": 1 00:12:56.927 }, 00:12:56.927 { 00:12:56.927 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:56.927 "dma_device_type": 2 00:12:56.927 }, 00:12:56.927 { 00:12:56.927 "dma_device_id": "system", 00:12:56.927 "dma_device_type": 1 00:12:56.927 }, 00:12:56.927 { 00:12:56.927 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:12:56.927 "dma_device_type": 2 00:12:56.927 } 00:12:56.927 ], 00:12:56.927 "driver_specific": { 00:12:56.927 "raid": { 00:12:56.927 "uuid": "321ec132-a046-4189-949a-286d6f63783d", 00:12:56.927 "strip_size_kb": 64, 00:12:56.927 "state": "online", 00:12:56.927 "raid_level": "concat", 00:12:56.927 "superblock": true, 00:12:56.927 "num_base_bdevs": 4, 00:12:56.927 "num_base_bdevs_discovered": 4, 00:12:56.927 "num_base_bdevs_operational": 4, 00:12:56.927 "base_bdevs_list": [ 00:12:56.927 { 00:12:56.927 "name": "pt1", 00:12:56.927 "uuid": "00000000-0000-0000-0000-000000000001", 00:12:56.927 "is_configured": true, 00:12:56.927 "data_offset": 2048, 00:12:56.927 "data_size": 63488 00:12:56.927 }, 00:12:56.927 { 00:12:56.927 "name": "pt2", 00:12:56.927 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:56.927 "is_configured": true, 00:12:56.927 "data_offset": 2048, 00:12:56.927 "data_size": 63488 00:12:56.927 }, 00:12:56.927 { 00:12:56.927 "name": "pt3", 00:12:56.927 "uuid": "00000000-0000-0000-0000-000000000003", 00:12:56.927 "is_configured": true, 00:12:56.927 "data_offset": 2048, 00:12:56.927 "data_size": 63488 00:12:56.927 }, 00:12:56.927 { 00:12:56.927 "name": "pt4", 00:12:56.927 "uuid": "00000000-0000-0000-0000-000000000004", 00:12:56.927 "is_configured": true, 00:12:56.927 "data_offset": 2048, 00:12:56.927 "data_size": 63488 00:12:56.927 } 00:12:56.927 ] 00:12:56.927 } 00:12:56.927 } 00:12:56.927 }' 00:12:56.927 04:03:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:12:56.927 04:03:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:12:56.927 pt2 00:12:56.927 pt3 00:12:56.927 pt4' 00:12:56.927 04:03:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:56.927 04:03:50 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:12:56.928 04:03:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:56.928 04:03:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:12:56.928 04:03:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:56.928 04:03:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:56.928 04:03:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:57.187 04:03:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:57.187 04:03:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:57.187 04:03:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:57.187 04:03:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:57.187 04:03:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:12:57.187 04:03:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:57.187 04:03:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:57.187 04:03:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:57.187 04:03:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:57.187 04:03:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:57.187 04:03:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:57.187 04:03:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:57.187 04:03:50 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:12:57.187 04:03:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:57.187 04:03:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:57.187 04:03:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:57.187 04:03:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:57.187 04:03:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:57.187 04:03:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:57.187 04:03:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:57.187 04:03:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:12:57.187 04:03:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:57.187 04:03:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:57.187 04:03:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:57.187 04:03:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:57.187 04:03:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:57.187 04:03:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:57.187 04:03:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:12:57.187 04:03:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:57.187 04:03:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 
00:12:57.187 04:03:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:12:57.187 [2024-12-06 04:03:50.466824] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:57.187 04:03:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:57.187 04:03:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=321ec132-a046-4189-949a-286d6f63783d 00:12:57.187 04:03:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 321ec132-a046-4189-949a-286d6f63783d ']' 00:12:57.187 04:03:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:12:57.187 04:03:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:57.187 04:03:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:57.187 [2024-12-06 04:03:50.514394] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:12:57.187 [2024-12-06 04:03:50.514422] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:57.187 [2024-12-06 04:03:50.514519] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:57.187 [2024-12-06 04:03:50.514598] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:57.187 [2024-12-06 04:03:50.514627] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:12:57.187 04:03:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:57.187 04:03:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:57.187 04:03:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:12:57.187 04:03:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:12:57.187 04:03:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:57.187 04:03:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:57.446 04:03:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:12:57.446 04:03:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:12:57.446 04:03:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:12:57.446 04:03:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:12:57.446 04:03:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:57.446 04:03:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:57.446 04:03:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:57.446 04:03:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:12:57.446 04:03:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:12:57.446 04:03:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:57.446 04:03:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:57.446 04:03:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:57.446 04:03:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:12:57.446 04:03:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:12:57.446 04:03:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:57.446 04:03:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:57.446 04:03:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:12:57.446 04:03:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:12:57.446 04:03:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt4 00:12:57.446 04:03:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:57.446 04:03:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:57.446 04:03:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:57.446 04:03:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:12:57.446 04:03:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:12:57.446 04:03:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:57.446 04:03:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:57.446 04:03:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:57.446 04:03:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:12:57.446 04:03:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:12:57.446 04:03:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:12:57.446 04:03:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:12:57.446 04:03:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:12:57.446 04:03:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:57.446 04:03:50 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@644 -- # type -t rpc_cmd 00:12:57.446 04:03:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:57.446 04:03:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:12:57.446 04:03:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:57.446 04:03:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:57.446 [2024-12-06 04:03:50.678239] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:12:57.446 [2024-12-06 04:03:50.680394] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:12:57.446 [2024-12-06 04:03:50.680456] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:12:57.446 [2024-12-06 04:03:50.680495] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed 00:12:57.446 [2024-12-06 04:03:50.680554] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:12:57.446 [2024-12-06 04:03:50.680614] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:12:57.446 [2024-12-06 04:03:50.680638] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:12:57.446 [2024-12-06 04:03:50.680660] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc4 00:12:57.447 [2024-12-06 04:03:50.680675] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:12:57.447 [2024-12-06 04:03:50.680688] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state 
configuring 00:12:57.447 request: 00:12:57.447 { 00:12:57.447 "name": "raid_bdev1", 00:12:57.447 "raid_level": "concat", 00:12:57.447 "base_bdevs": [ 00:12:57.447 "malloc1", 00:12:57.447 "malloc2", 00:12:57.447 "malloc3", 00:12:57.447 "malloc4" 00:12:57.447 ], 00:12:57.447 "strip_size_kb": 64, 00:12:57.447 "superblock": false, 00:12:57.447 "method": "bdev_raid_create", 00:12:57.447 "req_id": 1 00:12:57.447 } 00:12:57.447 Got JSON-RPC error response 00:12:57.447 response: 00:12:57.447 { 00:12:57.447 "code": -17, 00:12:57.447 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:12:57.447 } 00:12:57.447 04:03:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:12:57.447 04:03:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:12:57.447 04:03:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:12:57.447 04:03:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:12:57.447 04:03:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:12:57.447 04:03:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:12:57.447 04:03:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:57.447 04:03:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:57.447 04:03:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:57.447 04:03:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:57.447 04:03:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:12:57.447 04:03:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:12:57.447 04:03:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 
00000000-0000-0000-0000-000000000001 00:12:57.447 04:03:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:57.447 04:03:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:57.447 [2024-12-06 04:03:50.726079] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:12:57.447 [2024-12-06 04:03:50.726141] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:57.447 [2024-12-06 04:03:50.726164] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:12:57.447 [2024-12-06 04:03:50.726177] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:57.447 [2024-12-06 04:03:50.728676] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:57.447 [2024-12-06 04:03:50.728724] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:12:57.447 [2024-12-06 04:03:50.728818] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:12:57.447 [2024-12-06 04:03:50.728889] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:12:57.447 pt1 00:12:57.447 04:03:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:57.447 04:03:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 4 00:12:57.447 04:03:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:57.447 04:03:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:57.447 04:03:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:57.447 04:03:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:57.447 04:03:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # 
local num_base_bdevs_operational=4 00:12:57.447 04:03:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:57.447 04:03:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:57.447 04:03:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:57.447 04:03:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:57.447 04:03:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:57.447 04:03:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:57.447 04:03:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:57.447 04:03:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:57.447 04:03:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:57.447 04:03:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:57.447 "name": "raid_bdev1", 00:12:57.447 "uuid": "321ec132-a046-4189-949a-286d6f63783d", 00:12:57.447 "strip_size_kb": 64, 00:12:57.447 "state": "configuring", 00:12:57.447 "raid_level": "concat", 00:12:57.447 "superblock": true, 00:12:57.447 "num_base_bdevs": 4, 00:12:57.447 "num_base_bdevs_discovered": 1, 00:12:57.447 "num_base_bdevs_operational": 4, 00:12:57.447 "base_bdevs_list": [ 00:12:57.447 { 00:12:57.447 "name": "pt1", 00:12:57.447 "uuid": "00000000-0000-0000-0000-000000000001", 00:12:57.447 "is_configured": true, 00:12:57.447 "data_offset": 2048, 00:12:57.447 "data_size": 63488 00:12:57.447 }, 00:12:57.447 { 00:12:57.447 "name": null, 00:12:57.447 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:57.447 "is_configured": false, 00:12:57.447 "data_offset": 2048, 00:12:57.447 "data_size": 63488 00:12:57.447 }, 00:12:57.447 { 00:12:57.447 "name": null, 00:12:57.447 
"uuid": "00000000-0000-0000-0000-000000000003", 00:12:57.447 "is_configured": false, 00:12:57.447 "data_offset": 2048, 00:12:57.447 "data_size": 63488 00:12:57.447 }, 00:12:57.447 { 00:12:57.447 "name": null, 00:12:57.447 "uuid": "00000000-0000-0000-0000-000000000004", 00:12:57.447 "is_configured": false, 00:12:57.447 "data_offset": 2048, 00:12:57.447 "data_size": 63488 00:12:57.447 } 00:12:57.447 ] 00:12:57.447 }' 00:12:57.447 04:03:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:57.447 04:03:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:58.013 04:03:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 4 -gt 2 ']' 00:12:58.013 04:03:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:12:58.013 04:03:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:58.013 04:03:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:58.013 [2024-12-06 04:03:51.205296] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:12:58.013 [2024-12-06 04:03:51.205388] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:58.013 [2024-12-06 04:03:51.205411] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:12:58.013 [2024-12-06 04:03:51.205424] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:58.013 [2024-12-06 04:03:51.205917] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:58.013 [2024-12-06 04:03:51.205952] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:12:58.013 [2024-12-06 04:03:51.206056] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:12:58.014 [2024-12-06 04:03:51.206095] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:12:58.014 pt2 00:12:58.014 04:03:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:58.014 04:03:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:12:58.014 04:03:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:58.014 04:03:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:58.014 [2024-12-06 04:03:51.217304] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:12:58.014 04:03:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:58.014 04:03:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 4 00:12:58.014 04:03:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:58.014 04:03:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:58.014 04:03:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:58.014 04:03:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:58.014 04:03:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:58.014 04:03:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:58.014 04:03:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:58.014 04:03:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:58.014 04:03:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:58.014 04:03:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:58.014 04:03:51 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:58.014 04:03:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:58.014 04:03:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:58.014 04:03:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:58.014 04:03:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:58.014 "name": "raid_bdev1", 00:12:58.014 "uuid": "321ec132-a046-4189-949a-286d6f63783d", 00:12:58.014 "strip_size_kb": 64, 00:12:58.014 "state": "configuring", 00:12:58.014 "raid_level": "concat", 00:12:58.014 "superblock": true, 00:12:58.014 "num_base_bdevs": 4, 00:12:58.014 "num_base_bdevs_discovered": 1, 00:12:58.014 "num_base_bdevs_operational": 4, 00:12:58.014 "base_bdevs_list": [ 00:12:58.014 { 00:12:58.014 "name": "pt1", 00:12:58.014 "uuid": "00000000-0000-0000-0000-000000000001", 00:12:58.014 "is_configured": true, 00:12:58.014 "data_offset": 2048, 00:12:58.014 "data_size": 63488 00:12:58.014 }, 00:12:58.014 { 00:12:58.014 "name": null, 00:12:58.014 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:58.014 "is_configured": false, 00:12:58.014 "data_offset": 0, 00:12:58.014 "data_size": 63488 00:12:58.014 }, 00:12:58.014 { 00:12:58.014 "name": null, 00:12:58.014 "uuid": "00000000-0000-0000-0000-000000000003", 00:12:58.014 "is_configured": false, 00:12:58.014 "data_offset": 2048, 00:12:58.014 "data_size": 63488 00:12:58.014 }, 00:12:58.014 { 00:12:58.014 "name": null, 00:12:58.014 "uuid": "00000000-0000-0000-0000-000000000004", 00:12:58.014 "is_configured": false, 00:12:58.014 "data_offset": 2048, 00:12:58.014 "data_size": 63488 00:12:58.014 } 00:12:58.014 ] 00:12:58.014 }' 00:12:58.014 04:03:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:58.014 04:03:51 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:12:58.582 04:03:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:12:58.582 04:03:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:12:58.582 04:03:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:12:58.582 04:03:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:58.582 04:03:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:58.582 [2024-12-06 04:03:51.712518] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:12:58.582 [2024-12-06 04:03:51.712594] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:58.582 [2024-12-06 04:03:51.712616] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:12:58.582 [2024-12-06 04:03:51.712628] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:58.582 [2024-12-06 04:03:51.713123] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:58.582 [2024-12-06 04:03:51.713152] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:12:58.582 [2024-12-06 04:03:51.713245] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:12:58.582 [2024-12-06 04:03:51.713273] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:12:58.582 pt2 00:12:58.582 04:03:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:58.582 04:03:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:12:58.582 04:03:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:12:58.582 04:03:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd 
bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:12:58.582 04:03:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:58.582 04:03:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:58.582 [2024-12-06 04:03:51.720471] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:12:58.582 [2024-12-06 04:03:51.720529] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:58.582 [2024-12-06 04:03:51.720549] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:12:58.582 [2024-12-06 04:03:51.720558] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:58.582 [2024-12-06 04:03:51.720981] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:58.582 [2024-12-06 04:03:51.721013] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:12:58.582 [2024-12-06 04:03:51.721102] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:12:58.582 [2024-12-06 04:03:51.721132] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:12:58.582 pt3 00:12:58.582 04:03:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:58.582 04:03:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:12:58.582 04:03:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:12:58.582 04:03:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:12:58.582 04:03:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:58.582 04:03:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:58.582 [2024-12-06 04:03:51.728417] 
vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:12:58.582 [2024-12-06 04:03:51.728464] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:58.582 [2024-12-06 04:03:51.728481] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:12:58.582 [2024-12-06 04:03:51.728490] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:58.582 [2024-12-06 04:03:51.728873] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:58.582 [2024-12-06 04:03:51.728912] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:12:58.582 [2024-12-06 04:03:51.728986] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:12:58.582 [2024-12-06 04:03:51.729020] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:12:58.582 [2024-12-06 04:03:51.729194] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:12:58.582 [2024-12-06 04:03:51.729210] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:12:58.582 [2024-12-06 04:03:51.729473] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:12:58.582 [2024-12-06 04:03:51.729643] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:12:58.582 [2024-12-06 04:03:51.729665] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:12:58.582 [2024-12-06 04:03:51.729827] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:58.582 pt4 00:12:58.582 04:03:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:58.582 04:03:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:12:58.582 04:03:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- 
# (( i < num_base_bdevs )) 00:12:58.582 04:03:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:12:58.582 04:03:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:58.582 04:03:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:58.582 04:03:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:58.582 04:03:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:58.582 04:03:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:58.582 04:03:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:58.582 04:03:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:58.582 04:03:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:58.582 04:03:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:58.582 04:03:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:58.582 04:03:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:58.582 04:03:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:58.582 04:03:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:58.582 04:03:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:58.582 04:03:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:58.582 "name": "raid_bdev1", 00:12:58.582 "uuid": "321ec132-a046-4189-949a-286d6f63783d", 00:12:58.582 "strip_size_kb": 64, 00:12:58.582 "state": "online", 00:12:58.582 "raid_level": "concat", 00:12:58.582 
"superblock": true, 00:12:58.582 "num_base_bdevs": 4, 00:12:58.582 "num_base_bdevs_discovered": 4, 00:12:58.582 "num_base_bdevs_operational": 4, 00:12:58.582 "base_bdevs_list": [ 00:12:58.582 { 00:12:58.582 "name": "pt1", 00:12:58.582 "uuid": "00000000-0000-0000-0000-000000000001", 00:12:58.582 "is_configured": true, 00:12:58.582 "data_offset": 2048, 00:12:58.582 "data_size": 63488 00:12:58.582 }, 00:12:58.582 { 00:12:58.582 "name": "pt2", 00:12:58.582 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:58.582 "is_configured": true, 00:12:58.582 "data_offset": 2048, 00:12:58.582 "data_size": 63488 00:12:58.582 }, 00:12:58.582 { 00:12:58.582 "name": "pt3", 00:12:58.582 "uuid": "00000000-0000-0000-0000-000000000003", 00:12:58.582 "is_configured": true, 00:12:58.582 "data_offset": 2048, 00:12:58.582 "data_size": 63488 00:12:58.582 }, 00:12:58.582 { 00:12:58.582 "name": "pt4", 00:12:58.582 "uuid": "00000000-0000-0000-0000-000000000004", 00:12:58.582 "is_configured": true, 00:12:58.582 "data_offset": 2048, 00:12:58.582 "data_size": 63488 00:12:58.582 } 00:12:58.582 ] 00:12:58.582 }' 00:12:58.582 04:03:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:58.582 04:03:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:58.842 04:03:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:12:58.842 04:03:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:12:58.842 04:03:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:12:58.842 04:03:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:12:58.842 04:03:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:12:58.842 04:03:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:12:58.842 04:03:52 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:12:58.842 04:03:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:58.842 04:03:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:58.842 04:03:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:12:58.842 [2024-12-06 04:03:52.180017] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:59.102 04:03:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:59.102 04:03:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:12:59.102 "name": "raid_bdev1", 00:12:59.102 "aliases": [ 00:12:59.102 "321ec132-a046-4189-949a-286d6f63783d" 00:12:59.102 ], 00:12:59.102 "product_name": "Raid Volume", 00:12:59.102 "block_size": 512, 00:12:59.102 "num_blocks": 253952, 00:12:59.102 "uuid": "321ec132-a046-4189-949a-286d6f63783d", 00:12:59.102 "assigned_rate_limits": { 00:12:59.102 "rw_ios_per_sec": 0, 00:12:59.102 "rw_mbytes_per_sec": 0, 00:12:59.102 "r_mbytes_per_sec": 0, 00:12:59.102 "w_mbytes_per_sec": 0 00:12:59.102 }, 00:12:59.102 "claimed": false, 00:12:59.102 "zoned": false, 00:12:59.102 "supported_io_types": { 00:12:59.102 "read": true, 00:12:59.102 "write": true, 00:12:59.102 "unmap": true, 00:12:59.102 "flush": true, 00:12:59.102 "reset": true, 00:12:59.102 "nvme_admin": false, 00:12:59.102 "nvme_io": false, 00:12:59.102 "nvme_io_md": false, 00:12:59.102 "write_zeroes": true, 00:12:59.102 "zcopy": false, 00:12:59.102 "get_zone_info": false, 00:12:59.102 "zone_management": false, 00:12:59.102 "zone_append": false, 00:12:59.102 "compare": false, 00:12:59.102 "compare_and_write": false, 00:12:59.102 "abort": false, 00:12:59.102 "seek_hole": false, 00:12:59.102 "seek_data": false, 00:12:59.102 "copy": false, 00:12:59.102 "nvme_iov_md": false 00:12:59.102 }, 00:12:59.102 
"memory_domains": [ 00:12:59.102 { 00:12:59.102 "dma_device_id": "system", 00:12:59.102 "dma_device_type": 1 00:12:59.102 }, 00:12:59.102 { 00:12:59.102 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:59.102 "dma_device_type": 2 00:12:59.102 }, 00:12:59.102 { 00:12:59.102 "dma_device_id": "system", 00:12:59.102 "dma_device_type": 1 00:12:59.102 }, 00:12:59.102 { 00:12:59.102 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:59.102 "dma_device_type": 2 00:12:59.102 }, 00:12:59.102 { 00:12:59.102 "dma_device_id": "system", 00:12:59.102 "dma_device_type": 1 00:12:59.102 }, 00:12:59.102 { 00:12:59.102 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:59.102 "dma_device_type": 2 00:12:59.102 }, 00:12:59.102 { 00:12:59.102 "dma_device_id": "system", 00:12:59.102 "dma_device_type": 1 00:12:59.102 }, 00:12:59.102 { 00:12:59.102 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:59.102 "dma_device_type": 2 00:12:59.102 } 00:12:59.102 ], 00:12:59.102 "driver_specific": { 00:12:59.102 "raid": { 00:12:59.102 "uuid": "321ec132-a046-4189-949a-286d6f63783d", 00:12:59.102 "strip_size_kb": 64, 00:12:59.102 "state": "online", 00:12:59.102 "raid_level": "concat", 00:12:59.102 "superblock": true, 00:12:59.102 "num_base_bdevs": 4, 00:12:59.102 "num_base_bdevs_discovered": 4, 00:12:59.102 "num_base_bdevs_operational": 4, 00:12:59.102 "base_bdevs_list": [ 00:12:59.102 { 00:12:59.102 "name": "pt1", 00:12:59.102 "uuid": "00000000-0000-0000-0000-000000000001", 00:12:59.102 "is_configured": true, 00:12:59.102 "data_offset": 2048, 00:12:59.102 "data_size": 63488 00:12:59.102 }, 00:12:59.102 { 00:12:59.102 "name": "pt2", 00:12:59.102 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:59.102 "is_configured": true, 00:12:59.102 "data_offset": 2048, 00:12:59.102 "data_size": 63488 00:12:59.102 }, 00:12:59.102 { 00:12:59.102 "name": "pt3", 00:12:59.102 "uuid": "00000000-0000-0000-0000-000000000003", 00:12:59.102 "is_configured": true, 00:12:59.102 "data_offset": 2048, 00:12:59.102 "data_size": 63488 
00:12:59.102 }, 00:12:59.102 { 00:12:59.102 "name": "pt4", 00:12:59.102 "uuid": "00000000-0000-0000-0000-000000000004", 00:12:59.103 "is_configured": true, 00:12:59.103 "data_offset": 2048, 00:12:59.103 "data_size": 63488 00:12:59.103 } 00:12:59.103 ] 00:12:59.103 } 00:12:59.103 } 00:12:59.103 }' 00:12:59.103 04:03:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:12:59.103 04:03:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:12:59.103 pt2 00:12:59.103 pt3 00:12:59.103 pt4' 00:12:59.103 04:03:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:59.103 04:03:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:12:59.103 04:03:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:59.103 04:03:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:12:59.103 04:03:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:59.103 04:03:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:59.103 04:03:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:59.103 04:03:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:59.103 04:03:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:59.103 04:03:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:59.103 04:03:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:59.103 04:03:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd 
bdev_get_bdevs -b pt2 00:12:59.103 04:03:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:59.103 04:03:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:59.103 04:03:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:59.103 04:03:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:59.103 04:03:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:59.103 04:03:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:59.103 04:03:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:59.103 04:03:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:12:59.103 04:03:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:59.103 04:03:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:59.103 04:03:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:59.103 04:03:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:59.103 04:03:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:59.103 04:03:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:59.103 04:03:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:59.365 04:03:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:12:59.365 04:03:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 
00:12:59.365 04:03:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:59.365 04:03:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:59.365 04:03:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:59.365 04:03:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:59.365 04:03:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:59.365 04:03:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:12:59.365 04:03:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:12:59.365 04:03:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:59.365 04:03:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:59.365 [2024-12-06 04:03:52.515436] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:59.365 04:03:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:59.365 04:03:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 321ec132-a046-4189-949a-286d6f63783d '!=' 321ec132-a046-4189-949a-286d6f63783d ']' 00:12:59.365 04:03:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy concat 00:12:59.365 04:03:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:12:59.365 04:03:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1 00:12:59.365 04:03:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 72733 00:12:59.365 04:03:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 72733 ']' 00:12:59.365 04:03:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # kill -0 72733 00:12:59.365 04:03:52 bdev_raid.raid_superblock_test 
-- common/autotest_common.sh@959 -- # uname 00:12:59.365 04:03:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:59.365 04:03:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72733 00:12:59.365 04:03:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:59.365 04:03:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:59.365 killing process with pid 72733 00:12:59.365 04:03:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72733' 00:12:59.365 04:03:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@973 -- # kill 72733 00:12:59.365 04:03:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@978 -- # wait 72733 00:12:59.365 [2024-12-06 04:03:52.581870] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:12:59.365 [2024-12-06 04:03:52.581979] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:59.365 [2024-12-06 04:03:52.582062] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:59.365 [2024-12-06 04:03:52.582075] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:12:59.937 [2024-12-06 04:03:53.015179] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:13:01.318 04:03:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:13:01.318 00:13:01.318 real 0m5.771s 00:13:01.318 user 0m8.218s 00:13:01.318 sys 0m0.984s 00:13:01.318 04:03:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:01.318 04:03:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:01.318 ************************************ 00:13:01.318 END TEST raid_superblock_test 
00:13:01.318 ************************************ 00:13:01.318 04:03:54 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test concat 4 read 00:13:01.318 04:03:54 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:13:01.318 04:03:54 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:01.318 04:03:54 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:13:01.318 ************************************ 00:13:01.318 START TEST raid_read_error_test 00:13:01.318 ************************************ 00:13:01.318 04:03:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test concat 4 read 00:13:01.318 04:03:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat 00:13:01.318 04:03:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4 00:13:01.318 04:03:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:13:01.318 04:03:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:13:01.318 04:03:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:13:01.318 04:03:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:13:01.318 04:03:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:13:01.318 04:03:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:13:01.318 04:03:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:13:01.318 04:03:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:13:01.318 04:03:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:13:01.318 04:03:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:13:01.318 04:03:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # 
(( i++ )) 00:13:01.318 04:03:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:13:01.318 04:03:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4 00:13:01.318 04:03:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:13:01.318 04:03:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:13:01.318 04:03:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:13:01.318 04:03:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:13:01.318 04:03:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:13:01.319 04:03:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:13:01.319 04:03:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:13:01.319 04:03:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:13:01.319 04:03:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:13:01.319 04:03:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']' 00:13:01.319 04:03:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:13:01.319 04:03:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:13:01.319 04:03:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:13:01.319 04:03:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.47aaXncE2b 00:13:01.319 04:03:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=72992 00:13:01.319 04:03:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 72992 00:13:01.319 04:03:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # 
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:13:01.319 04:03:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # '[' -z 72992 ']' 00:13:01.319 04:03:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:01.319 04:03:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:01.319 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:01.319 04:03:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:01.319 04:03:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:01.319 04:03:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:01.319 [2024-12-06 04:03:54.467910] Starting SPDK v25.01-pre git sha1 a4d2a837b / DPDK 24.03.0 initialization... 
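The raid volume dumped earlier in this trace reports `"raid_level": "concat"` over 4 base bdevs, each with a `data_size` of 63488 blocks (253952 blocks total, matching `num_blocks`). As a conceptual aid only, here is a minimal sketch of how a concat level maps a logical block to a base bdev and an offset within it; this is an illustration of the idea, not SPDK's actual implementation, and the sizes are taken from the log output above.

```python
# Sizes taken from the raid_bdev1 dump in the trace:
# 4 base bdevs, data_size 63488 blocks each, num_blocks 253952 total.
BASE_BDEV_BLOCKS = [63488, 63488, 63488, 63488]

def concat_map(logical_block):
    """Walk the concatenated base bdevs until the logical block
    falls inside one; return (base bdev index, offset in that bdev).
    Illustrative sketch only -- not SPDK's real mapping code."""
    remaining = logical_block
    for idx, blocks in enumerate(BASE_BDEV_BLOCKS):
        if remaining < blocks:
            return idx, remaining
        remaining -= blocks
    raise ValueError(
        "logical block %d beyond raid volume (num_blocks=%d)"
        % (logical_block, sum(BASE_BDEV_BLOCKS)))

print(concat_map(0))       # first block lands on the first base bdev
print(concat_map(63488))   # first block of the second base bdev
print(concat_map(253951))  # last block of the volume
```

Unlike a striped level, concat fills one base bdev completely before moving to the next, which is why the mapping is a running subtraction rather than a modulo on the strip size.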
00:13:01.319 [2024-12-06 04:03:54.468063] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72992 ] 00:13:01.319 [2024-12-06 04:03:54.649043] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:01.578 [2024-12-06 04:03:54.784452] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:01.838 [2024-12-06 04:03:55.021537] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:01.838 [2024-12-06 04:03:55.021592] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:02.097 04:03:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:02.097 04:03:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@868 -- # return 0 00:13:02.097 04:03:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:13:02.097 04:03:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:13:02.097 04:03:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:02.097 04:03:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:02.097 BaseBdev1_malloc 00:13:02.097 04:03:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:02.097 04:03:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:13:02.097 04:03:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:02.097 04:03:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:02.097 true 00:13:02.097 04:03:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
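Several steps in this trace pull fields out of the `bdev_get_bdevs` RPC output with jq, e.g. `jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name'`. For readers less familiar with jq, the following Python equivalent shows what that filter does, run against sample data shaped like (and trimmed from) the RPC output captured in the log; the sample JSON here is illustrative, not a verbatim RPC reply.

```python
import json

# Sample shaped like the `bdev_get_bdevs -b raid_bdev1` output in the trace,
# trimmed to the keys the jq filter touches (names taken from the log).
raid_bdev_info = json.loads("""
{
  "name": "raid_bdev1",
  "driver_specific": {
    "raid": {
      "base_bdevs_list": [
        {"name": "BaseBdev1", "is_configured": true},
        {"name": "BaseBdev2", "is_configured": true},
        {"name": "BaseBdev3", "is_configured": true},
        {"name": "BaseBdev4", "is_configured": true}
      ]
    }
  }
}
""")

def configured_base_bdev_names(info):
    """Python equivalent of the jq filter
    '.driver_specific.raid.base_bdevs_list[]
       | select(.is_configured == true).name'"""
    return [b["name"]
            for b in info["driver_specific"]["raid"]["base_bdevs_list"]
            if b["is_configured"]]

print(configured_base_bdev_names(raid_bdev_info))
```

The test script then iterates over these names (`for name in $base_bdev_names`) and compares each base bdev's `block_size`/`md_size`/`md_interleave`/`dif_type` tuple against the raid bdev's, which is the `[[ 512 == \5\1\2\ \ \ ]]` checks visible above.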
00:13:02.097 04:03:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:13:02.097 04:03:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:02.097 04:03:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:02.097 [2024-12-06 04:03:55.439988] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:13:02.097 [2024-12-06 04:03:55.440070] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:02.097 [2024-12-06 04:03:55.440098] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:13:02.097 [2024-12-06 04:03:55.440111] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:02.097 [2024-12-06 04:03:55.442612] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:02.097 [2024-12-06 04:03:55.442676] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:13:02.097 BaseBdev1 00:13:02.097 04:03:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:02.097 04:03:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:13:02.097 04:03:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:13:02.097 04:03:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:02.097 04:03:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:02.357 BaseBdev2_malloc 00:13:02.357 04:03:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:02.357 04:03:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:13:02.357 04:03:55 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:13:02.357 04:03:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:02.357 true 00:13:02.357 04:03:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:02.357 04:03:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:13:02.357 04:03:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:02.357 04:03:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:02.357 [2024-12-06 04:03:55.514307] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:13:02.357 [2024-12-06 04:03:55.514371] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:02.357 [2024-12-06 04:03:55.514391] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:13:02.357 [2024-12-06 04:03:55.514402] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:02.357 [2024-12-06 04:03:55.517021] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:02.357 [2024-12-06 04:03:55.517081] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:13:02.357 BaseBdev2 00:13:02.357 04:03:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:02.357 04:03:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:13:02.357 04:03:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:13:02.357 04:03:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:02.357 04:03:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:02.357 BaseBdev3_malloc 00:13:02.357 04:03:55 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:02.357 04:03:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:13:02.357 04:03:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:02.357 04:03:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:02.357 true 00:13:02.357 04:03:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:02.357 04:03:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:13:02.357 04:03:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:02.357 04:03:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:02.357 [2024-12-06 04:03:55.600891] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:13:02.357 [2024-12-06 04:03:55.600958] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:02.357 [2024-12-06 04:03:55.600981] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:13:02.358 [2024-12-06 04:03:55.600993] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:02.358 [2024-12-06 04:03:55.603572] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:02.358 [2024-12-06 04:03:55.603619] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:13:02.358 BaseBdev3 00:13:02.358 04:03:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:02.358 04:03:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:13:02.358 04:03:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b 
BaseBdev4_malloc 00:13:02.358 04:03:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:02.358 04:03:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:02.358 BaseBdev4_malloc 00:13:02.358 04:03:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:02.358 04:03:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc 00:13:02.358 04:03:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:02.358 04:03:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:02.358 true 00:13:02.358 04:03:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:02.358 04:03:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:13:02.358 04:03:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:02.358 04:03:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:02.358 [2024-12-06 04:03:55.674229] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:13:02.358 [2024-12-06 04:03:55.674350] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:02.358 [2024-12-06 04:03:55.674378] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:13:02.358 [2024-12-06 04:03:55.674391] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:02.358 [2024-12-06 04:03:55.676919] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:02.358 [2024-12-06 04:03:55.676977] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:13:02.358 BaseBdev4 00:13:02.358 04:03:55 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:02.358 04:03:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s 00:13:02.358 04:03:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:02.358 04:03:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:02.358 [2024-12-06 04:03:55.686297] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:02.358 [2024-12-06 04:03:55.688284] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:02.358 [2024-12-06 04:03:55.688383] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:13:02.358 [2024-12-06 04:03:55.688458] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:13:02.358 [2024-12-06 04:03:55.688785] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008580 00:13:02.358 [2024-12-06 04:03:55.688807] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:13:02.358 [2024-12-06 04:03:55.689131] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000068a0 00:13:02.358 [2024-12-06 04:03:55.689338] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008580 00:13:02.358 [2024-12-06 04:03:55.689352] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008580 00:13:02.358 [2024-12-06 04:03:55.689558] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:02.358 04:03:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:02.358 04:03:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:13:02.358 04:03:55 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:02.358 04:03:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:02.358 04:03:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:13:02.358 04:03:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:02.358 04:03:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:02.358 04:03:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:02.358 04:03:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:02.358 04:03:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:02.358 04:03:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:02.358 04:03:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:02.358 04:03:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:02.358 04:03:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:02.358 04:03:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:02.617 04:03:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:02.617 04:03:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:02.617 "name": "raid_bdev1", 00:13:02.618 "uuid": "d8d536ad-f811-448f-9fb3-17de62734c2d", 00:13:02.618 "strip_size_kb": 64, 00:13:02.618 "state": "online", 00:13:02.618 "raid_level": "concat", 00:13:02.618 "superblock": true, 00:13:02.618 "num_base_bdevs": 4, 00:13:02.618 "num_base_bdevs_discovered": 4, 00:13:02.618 "num_base_bdevs_operational": 4, 00:13:02.618 "base_bdevs_list": [ 
00:13:02.618 { 00:13:02.618 "name": "BaseBdev1", 00:13:02.618 "uuid": "18072b0a-504d-5362-b7d0-40303e2cf61b", 00:13:02.618 "is_configured": true, 00:13:02.618 "data_offset": 2048, 00:13:02.618 "data_size": 63488 00:13:02.618 }, 00:13:02.618 { 00:13:02.618 "name": "BaseBdev2", 00:13:02.618 "uuid": "2d83620e-bdd9-5145-8f39-1e110c217aac", 00:13:02.618 "is_configured": true, 00:13:02.618 "data_offset": 2048, 00:13:02.618 "data_size": 63488 00:13:02.618 }, 00:13:02.618 { 00:13:02.618 "name": "BaseBdev3", 00:13:02.618 "uuid": "d752af7f-15db-56da-9e56-8457950de06f", 00:13:02.618 "is_configured": true, 00:13:02.618 "data_offset": 2048, 00:13:02.618 "data_size": 63488 00:13:02.618 }, 00:13:02.618 { 00:13:02.618 "name": "BaseBdev4", 00:13:02.618 "uuid": "489db3a4-09ad-550e-81bf-d11fec6949e9", 00:13:02.618 "is_configured": true, 00:13:02.618 "data_offset": 2048, 00:13:02.618 "data_size": 63488 00:13:02.618 } 00:13:02.618 ] 00:13:02.618 }' 00:13:02.618 04:03:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:02.618 04:03:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:02.877 04:03:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:13:02.877 04:03:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:13:03.138 [2024-12-06 04:03:56.254772] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006a40 00:13:04.077 04:03:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:13:04.077 04:03:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:04.077 04:03:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:04.077 04:03:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:04.077 04:03:57 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:13:04.077 04:03:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]] 00:13:04.077 04:03:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=4 00:13:04.077 04:03:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:13:04.077 04:03:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:04.077 04:03:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:04.077 04:03:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:13:04.077 04:03:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:04.077 04:03:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:04.077 04:03:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:04.077 04:03:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:04.077 04:03:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:04.077 04:03:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:04.077 04:03:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:04.077 04:03:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:04.077 04:03:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:04.077 04:03:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:04.077 04:03:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:04.077 04:03:57 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:04.077 "name": "raid_bdev1", 00:13:04.077 "uuid": "d8d536ad-f811-448f-9fb3-17de62734c2d", 00:13:04.077 "strip_size_kb": 64, 00:13:04.077 "state": "online", 00:13:04.077 "raid_level": "concat", 00:13:04.077 "superblock": true, 00:13:04.077 "num_base_bdevs": 4, 00:13:04.077 "num_base_bdevs_discovered": 4, 00:13:04.077 "num_base_bdevs_operational": 4, 00:13:04.077 "base_bdevs_list": [ 00:13:04.077 { 00:13:04.077 "name": "BaseBdev1", 00:13:04.077 "uuid": "18072b0a-504d-5362-b7d0-40303e2cf61b", 00:13:04.077 "is_configured": true, 00:13:04.077 "data_offset": 2048, 00:13:04.077 "data_size": 63488 00:13:04.077 }, 00:13:04.077 { 00:13:04.077 "name": "BaseBdev2", 00:13:04.077 "uuid": "2d83620e-bdd9-5145-8f39-1e110c217aac", 00:13:04.077 "is_configured": true, 00:13:04.077 "data_offset": 2048, 00:13:04.077 "data_size": 63488 00:13:04.077 }, 00:13:04.077 { 00:13:04.077 "name": "BaseBdev3", 00:13:04.077 "uuid": "d752af7f-15db-56da-9e56-8457950de06f", 00:13:04.077 "is_configured": true, 00:13:04.077 "data_offset": 2048, 00:13:04.077 "data_size": 63488 00:13:04.077 }, 00:13:04.077 { 00:13:04.077 "name": "BaseBdev4", 00:13:04.077 "uuid": "489db3a4-09ad-550e-81bf-d11fec6949e9", 00:13:04.077 "is_configured": true, 00:13:04.077 "data_offset": 2048, 00:13:04.077 "data_size": 63488 00:13:04.077 } 00:13:04.077 ] 00:13:04.077 }' 00:13:04.077 04:03:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:04.077 04:03:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:04.339 04:03:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:13:04.339 04:03:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:04.339 04:03:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:04.339 [2024-12-06 04:03:57.620247] 
bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:13:04.339 [2024-12-06 04:03:57.620292] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:04.339 [2024-12-06 04:03:57.623556] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:04.339 [2024-12-06 04:03:57.623630] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:04.339 [2024-12-06 04:03:57.623682] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:04.339 [2024-12-06 04:03:57.623696] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state offline 00:13:04.339 { 00:13:04.339 "results": [ 00:13:04.339 { 00:13:04.339 "job": "raid_bdev1", 00:13:04.339 "core_mask": "0x1", 00:13:04.339 "workload": "randrw", 00:13:04.339 "percentage": 50, 00:13:04.339 "status": "finished", 00:13:04.339 "queue_depth": 1, 00:13:04.339 "io_size": 131072, 00:13:04.339 "runtime": 1.366076, 00:13:04.339 "iops": 13005.13295014333, 00:13:04.339 "mibps": 1625.6416187679163, 00:13:04.339 "io_failed": 1, 00:13:04.339 "io_timeout": 0, 00:13:04.339 "avg_latency_us": 106.46751676173112, 00:13:04.339 "min_latency_us": 27.165065502183406, 00:13:04.339 "max_latency_us": 1802.955458515284 00:13:04.339 } 00:13:04.339 ], 00:13:04.339 "core_count": 1 00:13:04.339 } 00:13:04.339 04:03:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:04.339 04:03:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 72992 00:13:04.339 04:03:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # '[' -z 72992 ']' 00:13:04.339 04:03:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # kill -0 72992 00:13:04.339 04:03:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # uname 00:13:04.339 04:03:57 bdev_raid.raid_read_error_test 
-- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:04.339 04:03:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72992 00:13:04.339 04:03:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:04.339 04:03:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:04.339 killing process with pid 72992 00:13:04.339 04:03:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72992' 00:13:04.339 04:03:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@973 -- # kill 72992 00:13:04.339 [2024-12-06 04:03:57.669977] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:13:04.339 04:03:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@978 -- # wait 72992 00:13:04.908 [2024-12-06 04:03:58.039352] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:13:06.289 04:03:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.47aaXncE2b 00:13:06.289 04:03:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:13:06.289 04:03:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:13:06.289 04:03:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.73 00:13:06.289 04:03:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy concat 00:13:06.289 04:03:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:13:06.289 04:03:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:13:06.289 04:03:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.73 != \0\.\0\0 ]] 00:13:06.289 00:13:06.289 real 0m5.059s 00:13:06.289 user 0m5.992s 00:13:06.289 sys 0m0.613s 00:13:06.289 04:03:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1130 -- # 
xtrace_disable 00:13:06.289 04:03:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:06.289 ************************************ 00:13:06.289 END TEST raid_read_error_test 00:13:06.289 ************************************ 00:13:06.289 04:03:59 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test concat 4 write 00:13:06.289 04:03:59 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:13:06.289 04:03:59 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:06.289 04:03:59 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:13:06.289 ************************************ 00:13:06.289 START TEST raid_write_error_test 00:13:06.289 ************************************ 00:13:06.289 04:03:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test concat 4 write 00:13:06.289 04:03:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat 00:13:06.289 04:03:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4 00:13:06.289 04:03:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:13:06.289 04:03:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:13:06.289 04:03:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:13:06.289 04:03:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:13:06.289 04:03:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:13:06.289 04:03:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:13:06.289 04:03:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:13:06.289 04:03:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:13:06.289 04:03:59 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:13:06.289 04:03:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:13:06.289 04:03:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:13:06.289 04:03:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:13:06.289 04:03:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4 00:13:06.289 04:03:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:13:06.289 04:03:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:13:06.289 04:03:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:13:06.289 04:03:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:13:06.289 04:03:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:13:06.289 04:03:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:13:06.289 04:03:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:13:06.289 04:03:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:13:06.289 04:03:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:13:06.289 04:03:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']' 00:13:06.289 04:03:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:13:06.289 04:03:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:13:06.289 04:03:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:13:06.289 04:03:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.KQjod6lTGv 00:13:06.289 04:03:59 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=73143 00:13:06.290 04:03:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 73143 00:13:06.290 04:03:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:13:06.290 04:03:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # '[' -z 73143 ']' 00:13:06.290 04:03:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:06.290 04:03:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:06.290 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:06.290 04:03:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:06.290 04:03:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:06.290 04:03:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:06.290 [2024-12-06 04:03:59.576373] Starting SPDK v25.01-pre git sha1 a4d2a837b / DPDK 24.03.0 initialization... 
00:13:06.290 [2024-12-06 04:03:59.576516] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73143 ] 00:13:06.550 [2024-12-06 04:03:59.759091] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:06.550 [2024-12-06 04:03:59.893312] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:06.809 [2024-12-06 04:04:00.114626] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:06.809 [2024-12-06 04:04:00.114692] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:07.380 04:04:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:07.380 04:04:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@868 -- # return 0 00:13:07.380 04:04:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:13:07.380 04:04:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:13:07.380 04:04:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:07.380 04:04:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:07.380 BaseBdev1_malloc 00:13:07.380 04:04:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:07.380 04:04:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:13:07.380 04:04:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:07.380 04:04:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:07.380 true 00:13:07.380 04:04:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:13:07.380 04:04:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:13:07.380 04:04:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:07.380 04:04:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:07.380 [2024-12-06 04:04:00.542086] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:13:07.380 [2024-12-06 04:04:00.542154] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:07.380 [2024-12-06 04:04:00.542180] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:13:07.380 [2024-12-06 04:04:00.542194] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:07.380 [2024-12-06 04:04:00.544679] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:07.380 [2024-12-06 04:04:00.544729] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:13:07.380 BaseBdev1 00:13:07.380 04:04:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:07.380 04:04:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:13:07.380 04:04:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:13:07.380 04:04:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:07.380 04:04:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:07.380 BaseBdev2_malloc 00:13:07.380 04:04:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:07.380 04:04:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:13:07.380 04:04:00 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:07.380 04:04:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:07.380 true 00:13:07.380 04:04:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:07.380 04:04:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:13:07.380 04:04:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:07.380 04:04:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:07.380 [2024-12-06 04:04:00.615140] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:13:07.380 [2024-12-06 04:04:00.615218] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:07.380 [2024-12-06 04:04:00.615244] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:13:07.380 [2024-12-06 04:04:00.615259] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:07.380 [2024-12-06 04:04:00.617750] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:07.380 [2024-12-06 04:04:00.617802] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:13:07.380 BaseBdev2 00:13:07.380 04:04:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:07.380 04:04:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:13:07.380 04:04:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:13:07.380 04:04:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:07.380 04:04:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 
00:13:07.380 BaseBdev3_malloc 00:13:07.380 04:04:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:07.380 04:04:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:13:07.380 04:04:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:07.380 04:04:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:07.380 true 00:13:07.380 04:04:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:07.380 04:04:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:13:07.380 04:04:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:07.380 04:04:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:07.380 [2024-12-06 04:04:00.702347] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:13:07.380 [2024-12-06 04:04:00.702420] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:07.380 [2024-12-06 04:04:00.702446] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:13:07.380 [2024-12-06 04:04:00.702459] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:07.380 [2024-12-06 04:04:00.704985] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:07.380 [2024-12-06 04:04:00.705039] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:13:07.380 BaseBdev3 00:13:07.380 04:04:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:07.380 04:04:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:13:07.381 04:04:00 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:13:07.381 04:04:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:07.381 04:04:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:07.641 BaseBdev4_malloc 00:13:07.641 04:04:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:07.641 04:04:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc 00:13:07.641 04:04:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:07.641 04:04:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:07.641 true 00:13:07.641 04:04:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:07.641 04:04:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:13:07.641 04:04:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:07.641 04:04:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:07.641 [2024-12-06 04:04:00.776320] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:13:07.641 [2024-12-06 04:04:00.776402] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:07.641 [2024-12-06 04:04:00.776432] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:13:07.641 [2024-12-06 04:04:00.776447] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:07.641 [2024-12-06 04:04:00.778941] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:07.641 [2024-12-06 04:04:00.778995] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:13:07.641 BaseBdev4 
00:13:07.641 04:04:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:07.641 04:04:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s 00:13:07.641 04:04:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:07.641 04:04:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:07.641 [2024-12-06 04:04:00.788377] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:07.641 [2024-12-06 04:04:00.790516] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:07.641 [2024-12-06 04:04:00.790614] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:13:07.641 [2024-12-06 04:04:00.790689] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:13:07.641 [2024-12-06 04:04:00.790971] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008580 00:13:07.641 [2024-12-06 04:04:00.791000] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:13:07.641 [2024-12-06 04:04:00.791341] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000068a0 00:13:07.641 [2024-12-06 04:04:00.791550] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008580 00:13:07.641 [2024-12-06 04:04:00.791569] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008580 00:13:07.641 [2024-12-06 04:04:00.791768] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:07.641 04:04:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:07.641 04:04:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state 
raid_bdev1 online concat 64 4 00:13:07.641 04:04:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:07.641 04:04:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:07.641 04:04:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:13:07.641 04:04:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:07.641 04:04:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:07.641 04:04:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:07.641 04:04:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:07.641 04:04:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:07.641 04:04:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:07.641 04:04:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:07.641 04:04:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:07.641 04:04:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:07.641 04:04:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:07.641 04:04:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:07.641 04:04:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:07.642 "name": "raid_bdev1", 00:13:07.642 "uuid": "b4eec2ea-9064-4545-816c-d3aba1199d02", 00:13:07.642 "strip_size_kb": 64, 00:13:07.642 "state": "online", 00:13:07.642 "raid_level": "concat", 00:13:07.642 "superblock": true, 00:13:07.642 "num_base_bdevs": 4, 00:13:07.642 "num_base_bdevs_discovered": 4, 00:13:07.642 
"num_base_bdevs_operational": 4, 00:13:07.642 "base_bdevs_list": [ 00:13:07.642 { 00:13:07.642 "name": "BaseBdev1", 00:13:07.642 "uuid": "bfb93a62-80cf-5185-8f34-0bdc99e3feb9", 00:13:07.642 "is_configured": true, 00:13:07.642 "data_offset": 2048, 00:13:07.642 "data_size": 63488 00:13:07.642 }, 00:13:07.642 { 00:13:07.642 "name": "BaseBdev2", 00:13:07.642 "uuid": "398ef0d8-1348-541b-98b9-41fb58eda4fc", 00:13:07.642 "is_configured": true, 00:13:07.642 "data_offset": 2048, 00:13:07.642 "data_size": 63488 00:13:07.642 }, 00:13:07.642 { 00:13:07.642 "name": "BaseBdev3", 00:13:07.642 "uuid": "1255c812-24f7-52e0-b5fe-48d66b35feb8", 00:13:07.642 "is_configured": true, 00:13:07.642 "data_offset": 2048, 00:13:07.642 "data_size": 63488 00:13:07.642 }, 00:13:07.642 { 00:13:07.642 "name": "BaseBdev4", 00:13:07.642 "uuid": "a650f65e-1e69-59aa-bf19-dc5271137ef3", 00:13:07.642 "is_configured": true, 00:13:07.642 "data_offset": 2048, 00:13:07.642 "data_size": 63488 00:13:07.642 } 00:13:07.642 ] 00:13:07.642 }' 00:13:07.642 04:04:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:07.642 04:04:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:08.211 04:04:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:13:08.211 04:04:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:13:08.211 [2024-12-06 04:04:01.404871] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006a40 00:13:09.176 04:04:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:13:09.176 04:04:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:09.176 04:04:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:09.176 04:04:02 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:09.176 04:04:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:13:09.176 04:04:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]] 00:13:09.176 04:04:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=4 00:13:09.176 04:04:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:13:09.176 04:04:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:09.176 04:04:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:09.176 04:04:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:13:09.176 04:04:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:09.176 04:04:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:09.176 04:04:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:09.176 04:04:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:09.176 04:04:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:09.176 04:04:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:09.176 04:04:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:09.176 04:04:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:09.176 04:04:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:09.176 04:04:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:09.176 04:04:02 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:09.176 04:04:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:09.176 "name": "raid_bdev1", 00:13:09.176 "uuid": "b4eec2ea-9064-4545-816c-d3aba1199d02", 00:13:09.176 "strip_size_kb": 64, 00:13:09.176 "state": "online", 00:13:09.176 "raid_level": "concat", 00:13:09.176 "superblock": true, 00:13:09.176 "num_base_bdevs": 4, 00:13:09.176 "num_base_bdevs_discovered": 4, 00:13:09.176 "num_base_bdevs_operational": 4, 00:13:09.176 "base_bdevs_list": [ 00:13:09.176 { 00:13:09.176 "name": "BaseBdev1", 00:13:09.176 "uuid": "bfb93a62-80cf-5185-8f34-0bdc99e3feb9", 00:13:09.176 "is_configured": true, 00:13:09.176 "data_offset": 2048, 00:13:09.176 "data_size": 63488 00:13:09.176 }, 00:13:09.176 { 00:13:09.176 "name": "BaseBdev2", 00:13:09.176 "uuid": "398ef0d8-1348-541b-98b9-41fb58eda4fc", 00:13:09.176 "is_configured": true, 00:13:09.176 "data_offset": 2048, 00:13:09.176 "data_size": 63488 00:13:09.176 }, 00:13:09.176 { 00:13:09.176 "name": "BaseBdev3", 00:13:09.176 "uuid": "1255c812-24f7-52e0-b5fe-48d66b35feb8", 00:13:09.176 "is_configured": true, 00:13:09.176 "data_offset": 2048, 00:13:09.176 "data_size": 63488 00:13:09.176 }, 00:13:09.176 { 00:13:09.177 "name": "BaseBdev4", 00:13:09.177 "uuid": "a650f65e-1e69-59aa-bf19-dc5271137ef3", 00:13:09.177 "is_configured": true, 00:13:09.177 "data_offset": 2048, 00:13:09.177 "data_size": 63488 00:13:09.177 } 00:13:09.177 ] 00:13:09.177 }' 00:13:09.177 04:04:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:09.177 04:04:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:09.744 04:04:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:13:09.744 04:04:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:09.744 04:04:02 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@10 -- # set +x 00:13:09.744 [2024-12-06 04:04:02.810469] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:13:09.744 [2024-12-06 04:04:02.810511] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:09.744 [2024-12-06 04:04:02.813879] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:09.744 [2024-12-06 04:04:02.813959] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:09.744 [2024-12-06 04:04:02.814009] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:09.744 [2024-12-06 04:04:02.814023] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state offline 00:13:09.744 { 00:13:09.744 "results": [ 00:13:09.744 { 00:13:09.744 "job": "raid_bdev1", 00:13:09.744 "core_mask": "0x1", 00:13:09.744 "workload": "randrw", 00:13:09.744 "percentage": 50, 00:13:09.744 "status": "finished", 00:13:09.744 "queue_depth": 1, 00:13:09.744 "io_size": 131072, 00:13:09.744 "runtime": 1.406162, 00:13:09.744 "iops": 12266.723179832765, 00:13:09.744 "mibps": 1533.3403974790956, 00:13:09.744 "io_failed": 1, 00:13:09.744 "io_timeout": 0, 00:13:09.744 "avg_latency_us": 112.61512096702741, 00:13:09.744 "min_latency_us": 32.19563318777293, 00:13:09.744 "max_latency_us": 1781.4917030567685 00:13:09.744 } 00:13:09.744 ], 00:13:09.744 "core_count": 1 00:13:09.744 } 00:13:09.744 04:04:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:09.744 04:04:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 73143 00:13:09.744 04:04:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # '[' -z 73143 ']' 00:13:09.744 04:04:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # kill -0 73143 00:13:09.744 04:04:02 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@959 -- # uname 00:13:09.744 04:04:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:09.744 04:04:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 73143 00:13:09.744 04:04:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:09.744 04:04:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:09.744 killing process with pid 73143 00:13:09.744 04:04:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 73143' 00:13:09.744 04:04:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@973 -- # kill 73143 00:13:09.744 [2024-12-06 04:04:02.863318] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:13:09.744 04:04:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@978 -- # wait 73143 00:13:10.003 [2024-12-06 04:04:03.249912] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:13:11.383 04:04:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:13:11.383 04:04:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.KQjod6lTGv 00:13:11.383 04:04:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:13:11.383 04:04:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.71 00:13:11.383 04:04:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy concat 00:13:11.383 04:04:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:13:11.383 04:04:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:13:11.383 04:04:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.71 != \0\.\0\0 ]] 00:13:11.383 00:13:11.383 real 0m5.191s 00:13:11.383 user 0m6.139s 
00:13:11.383 sys 0m0.670s 00:13:11.383 04:04:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:11.383 04:04:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:11.383 ************************************ 00:13:11.384 END TEST raid_write_error_test 00:13:11.384 ************************************ 00:13:11.384 04:04:04 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:13:11.384 04:04:04 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test raid1 4 false 00:13:11.384 04:04:04 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:13:11.384 04:04:04 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:11.384 04:04:04 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:13:11.644 ************************************ 00:13:11.644 START TEST raid_state_function_test 00:13:11.644 ************************************ 00:13:11.644 04:04:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test raid1 4 false 00:13:11.644 04:04:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:13:11.644 04:04:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:13:11.644 04:04:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:13:11.644 04:04:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:13:11.644 04:04:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:13:11.644 04:04:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:13:11.644 04:04:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:13:11.644 04:04:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:13:11.644 
04:04:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:13:11.644 04:04:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:13:11.644 04:04:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:13:11.644 04:04:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:13:11.644 04:04:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:13:11.644 04:04:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:13:11.644 04:04:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:13:11.644 04:04:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:13:11.644 04:04:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:13:11.644 04:04:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:13:11.644 04:04:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:13:11.644 04:04:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:13:11.644 04:04:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:13:11.644 04:04:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:13:11.644 04:04:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:13:11.644 04:04:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:13:11.644 04:04:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:13:11.644 04:04:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:13:11.644 04:04:04 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:13:11.644 04:04:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:13:11.644 Process raid pid: 73292 00:13:11.644 04:04:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=73292 00:13:11.644 04:04:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:13:11.644 04:04:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 73292' 00:13:11.644 04:04:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 73292 00:13:11.644 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:11.644 04:04:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 73292 ']' 00:13:11.644 04:04:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:11.644 04:04:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:11.644 04:04:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:11.644 04:04:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:11.644 04:04:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:11.644 [2024-12-06 04:04:04.854084] Starting SPDK v25.01-pre git sha1 a4d2a837b / DPDK 24.03.0 initialization... 
00:13:11.644 [2024-12-06 04:04:04.854240] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:11.909 [2024-12-06 04:04:05.036527] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:11.909 [2024-12-06 04:04:05.206091] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:12.169 [2024-12-06 04:04:05.494099] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:12.169 [2024-12-06 04:04:05.494170] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:12.430 04:04:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:12.430 04:04:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:13:12.430 04:04:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:13:12.430 04:04:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:12.430 04:04:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:12.690 [2024-12-06 04:04:05.790316] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:13:12.690 [2024-12-06 04:04:05.790402] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:13:12.690 [2024-12-06 04:04:05.790415] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:13:12.690 [2024-12-06 04:04:05.790427] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:13:12.690 [2024-12-06 04:04:05.790436] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 
BaseBdev3 00:13:12.690 [2024-12-06 04:04:05.790448] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:13:12.690 [2024-12-06 04:04:05.790462] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:13:12.690 [2024-12-06 04:04:05.790474] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:13:12.690 04:04:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:12.690 04:04:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:13:12.690 04:04:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:12.690 04:04:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:12.690 04:04:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:12.690 04:04:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:12.690 04:04:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:12.690 04:04:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:12.690 04:04:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:12.690 04:04:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:12.690 04:04:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:12.690 04:04:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:12.690 04:04:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:12.690 04:04:05 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:13:12.690 04:04:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:12.690 04:04:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:12.690 04:04:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:12.690 "name": "Existed_Raid", 00:13:12.690 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:12.690 "strip_size_kb": 0, 00:13:12.690 "state": "configuring", 00:13:12.690 "raid_level": "raid1", 00:13:12.690 "superblock": false, 00:13:12.690 "num_base_bdevs": 4, 00:13:12.690 "num_base_bdevs_discovered": 0, 00:13:12.690 "num_base_bdevs_operational": 4, 00:13:12.690 "base_bdevs_list": [ 00:13:12.690 { 00:13:12.690 "name": "BaseBdev1", 00:13:12.690 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:12.690 "is_configured": false, 00:13:12.690 "data_offset": 0, 00:13:12.690 "data_size": 0 00:13:12.690 }, 00:13:12.690 { 00:13:12.690 "name": "BaseBdev2", 00:13:12.690 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:12.690 "is_configured": false, 00:13:12.690 "data_offset": 0, 00:13:12.690 "data_size": 0 00:13:12.690 }, 00:13:12.690 { 00:13:12.690 "name": "BaseBdev3", 00:13:12.690 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:12.690 "is_configured": false, 00:13:12.690 "data_offset": 0, 00:13:12.690 "data_size": 0 00:13:12.690 }, 00:13:12.690 { 00:13:12.690 "name": "BaseBdev4", 00:13:12.690 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:12.690 "is_configured": false, 00:13:12.690 "data_offset": 0, 00:13:12.690 "data_size": 0 00:13:12.690 } 00:13:12.690 ] 00:13:12.690 }' 00:13:12.690 04:04:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:12.690 04:04:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:12.951 04:04:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete 
Existed_Raid 00:13:12.951 04:04:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:12.951 04:04:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:12.951 [2024-12-06 04:04:06.265587] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:13:12.951 [2024-12-06 04:04:06.265661] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:13:12.951 04:04:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:12.951 04:04:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:13:12.951 04:04:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:12.951 04:04:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:12.951 [2024-12-06 04:04:06.277585] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:13:12.951 [2024-12-06 04:04:06.277683] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:13:12.951 [2024-12-06 04:04:06.277700] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:13:12.951 [2024-12-06 04:04:06.277714] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:13:12.951 [2024-12-06 04:04:06.277722] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:13:12.951 [2024-12-06 04:04:06.277734] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:13:12.951 [2024-12-06 04:04:06.277742] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:13:12.951 [2024-12-06 04:04:06.277753] bdev_raid_rpc.c: 
311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:13:12.951 04:04:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:12.951 04:04:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:13:12.951 04:04:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:12.951 04:04:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:13.211 [2024-12-06 04:04:06.340118] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:13.211 BaseBdev1 00:13:13.212 04:04:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:13.212 04:04:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:13:13.212 04:04:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:13:13.212 04:04:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:13:13.212 04:04:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:13:13.212 04:04:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:13:13.212 04:04:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:13:13.212 04:04:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:13:13.212 04:04:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:13.212 04:04:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:13.212 04:04:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:13.212 04:04:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd 
bdev_get_bdevs -b BaseBdev1 -t 2000 00:13:13.212 04:04:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:13.212 04:04:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:13.212 [ 00:13:13.212 { 00:13:13.212 "name": "BaseBdev1", 00:13:13.212 "aliases": [ 00:13:13.212 "4a3cd8f6-1860-4524-820c-628a85ed70d1" 00:13:13.212 ], 00:13:13.212 "product_name": "Malloc disk", 00:13:13.212 "block_size": 512, 00:13:13.212 "num_blocks": 65536, 00:13:13.212 "uuid": "4a3cd8f6-1860-4524-820c-628a85ed70d1", 00:13:13.212 "assigned_rate_limits": { 00:13:13.212 "rw_ios_per_sec": 0, 00:13:13.212 "rw_mbytes_per_sec": 0, 00:13:13.212 "r_mbytes_per_sec": 0, 00:13:13.212 "w_mbytes_per_sec": 0 00:13:13.212 }, 00:13:13.212 "claimed": true, 00:13:13.212 "claim_type": "exclusive_write", 00:13:13.212 "zoned": false, 00:13:13.212 "supported_io_types": { 00:13:13.212 "read": true, 00:13:13.212 "write": true, 00:13:13.212 "unmap": true, 00:13:13.212 "flush": true, 00:13:13.212 "reset": true, 00:13:13.212 "nvme_admin": false, 00:13:13.212 "nvme_io": false, 00:13:13.212 "nvme_io_md": false, 00:13:13.212 "write_zeroes": true, 00:13:13.212 "zcopy": true, 00:13:13.212 "get_zone_info": false, 00:13:13.212 "zone_management": false, 00:13:13.212 "zone_append": false, 00:13:13.212 "compare": false, 00:13:13.212 "compare_and_write": false, 00:13:13.212 "abort": true, 00:13:13.212 "seek_hole": false, 00:13:13.212 "seek_data": false, 00:13:13.212 "copy": true, 00:13:13.212 "nvme_iov_md": false 00:13:13.212 }, 00:13:13.212 "memory_domains": [ 00:13:13.212 { 00:13:13.212 "dma_device_id": "system", 00:13:13.212 "dma_device_type": 1 00:13:13.212 }, 00:13:13.212 { 00:13:13.212 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:13.212 "dma_device_type": 2 00:13:13.212 } 00:13:13.212 ], 00:13:13.212 "driver_specific": {} 00:13:13.212 } 00:13:13.212 ] 00:13:13.212 04:04:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:13:13.212 04:04:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:13:13.212 04:04:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:13:13.212 04:04:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:13.212 04:04:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:13.212 04:04:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:13.212 04:04:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:13.212 04:04:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:13.212 04:04:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:13.212 04:04:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:13.212 04:04:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:13.212 04:04:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:13.212 04:04:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:13.212 04:04:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:13.212 04:04:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:13.212 04:04:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:13.212 04:04:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:13.212 04:04:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:13.212 "name": "Existed_Raid", 
00:13:13.212 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:13.212 "strip_size_kb": 0, 00:13:13.212 "state": "configuring", 00:13:13.212 "raid_level": "raid1", 00:13:13.212 "superblock": false, 00:13:13.212 "num_base_bdevs": 4, 00:13:13.212 "num_base_bdevs_discovered": 1, 00:13:13.212 "num_base_bdevs_operational": 4, 00:13:13.212 "base_bdevs_list": [ 00:13:13.212 { 00:13:13.212 "name": "BaseBdev1", 00:13:13.212 "uuid": "4a3cd8f6-1860-4524-820c-628a85ed70d1", 00:13:13.212 "is_configured": true, 00:13:13.212 "data_offset": 0, 00:13:13.212 "data_size": 65536 00:13:13.212 }, 00:13:13.212 { 00:13:13.212 "name": "BaseBdev2", 00:13:13.212 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:13.212 "is_configured": false, 00:13:13.212 "data_offset": 0, 00:13:13.212 "data_size": 0 00:13:13.212 }, 00:13:13.212 { 00:13:13.212 "name": "BaseBdev3", 00:13:13.212 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:13.212 "is_configured": false, 00:13:13.212 "data_offset": 0, 00:13:13.212 "data_size": 0 00:13:13.212 }, 00:13:13.212 { 00:13:13.212 "name": "BaseBdev4", 00:13:13.212 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:13.212 "is_configured": false, 00:13:13.212 "data_offset": 0, 00:13:13.212 "data_size": 0 00:13:13.212 } 00:13:13.212 ] 00:13:13.212 }' 00:13:13.212 04:04:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:13.212 04:04:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:13.781 04:04:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:13:13.781 04:04:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:13.781 04:04:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:13.782 [2024-12-06 04:04:06.883278] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:13:13.782 [2024-12-06 04:04:06.883363] bdev_raid.c: 
380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:13:13.782 04:04:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:13.782 04:04:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:13:13.782 04:04:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:13.782 04:04:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:13.782 [2024-12-06 04:04:06.895320] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:13.782 [2024-12-06 04:04:06.897942] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:13:13.782 [2024-12-06 04:04:06.898000] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:13:13.782 [2024-12-06 04:04:06.898012] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:13:13.782 [2024-12-06 04:04:06.898025] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:13:13.782 [2024-12-06 04:04:06.898034] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:13:13.782 [2024-12-06 04:04:06.898059] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:13:13.782 04:04:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:13.782 04:04:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:13:13.782 04:04:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:13:13.782 04:04:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:13:13.782 
04:04:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:13.782 04:04:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:13.782 04:04:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:13.782 04:04:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:13.782 04:04:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:13.782 04:04:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:13.782 04:04:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:13.782 04:04:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:13.782 04:04:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:13.782 04:04:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:13.782 04:04:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:13.782 04:04:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:13.782 04:04:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:13.782 04:04:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:13.782 04:04:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:13.782 "name": "Existed_Raid", 00:13:13.782 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:13.782 "strip_size_kb": 0, 00:13:13.782 "state": "configuring", 00:13:13.782 "raid_level": "raid1", 00:13:13.782 "superblock": false, 00:13:13.782 "num_base_bdevs": 4, 00:13:13.782 "num_base_bdevs_discovered": 1, 
00:13:13.782 "num_base_bdevs_operational": 4, 00:13:13.782 "base_bdevs_list": [ 00:13:13.782 { 00:13:13.782 "name": "BaseBdev1", 00:13:13.782 "uuid": "4a3cd8f6-1860-4524-820c-628a85ed70d1", 00:13:13.782 "is_configured": true, 00:13:13.782 "data_offset": 0, 00:13:13.782 "data_size": 65536 00:13:13.782 }, 00:13:13.782 { 00:13:13.782 "name": "BaseBdev2", 00:13:13.782 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:13.782 "is_configured": false, 00:13:13.782 "data_offset": 0, 00:13:13.782 "data_size": 0 00:13:13.782 }, 00:13:13.782 { 00:13:13.782 "name": "BaseBdev3", 00:13:13.782 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:13.782 "is_configured": false, 00:13:13.782 "data_offset": 0, 00:13:13.782 "data_size": 0 00:13:13.782 }, 00:13:13.782 { 00:13:13.782 "name": "BaseBdev4", 00:13:13.782 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:13.782 "is_configured": false, 00:13:13.782 "data_offset": 0, 00:13:13.782 "data_size": 0 00:13:13.782 } 00:13:13.782 ] 00:13:13.782 }' 00:13:13.782 04:04:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:13.782 04:04:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:14.042 04:04:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:13:14.042 04:04:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:14.042 04:04:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:14.042 [2024-12-06 04:04:07.391932] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:14.042 BaseBdev2 00:13:14.042 04:04:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:14.042 04:04:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:13:14.042 04:04:07 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:13:14.042 04:04:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:13:14.042 04:04:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:13:14.042 04:04:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:13:14.301 04:04:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:13:14.301 04:04:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:13:14.301 04:04:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:14.301 04:04:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:14.301 04:04:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:14.301 04:04:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:13:14.301 04:04:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:14.301 04:04:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:14.301 [ 00:13:14.301 { 00:13:14.301 "name": "BaseBdev2", 00:13:14.301 "aliases": [ 00:13:14.301 "0b8c2578-3a2a-4f8f-a05b-b3ec55bd91c5" 00:13:14.301 ], 00:13:14.301 "product_name": "Malloc disk", 00:13:14.301 "block_size": 512, 00:13:14.301 "num_blocks": 65536, 00:13:14.301 "uuid": "0b8c2578-3a2a-4f8f-a05b-b3ec55bd91c5", 00:13:14.301 "assigned_rate_limits": { 00:13:14.301 "rw_ios_per_sec": 0, 00:13:14.301 "rw_mbytes_per_sec": 0, 00:13:14.301 "r_mbytes_per_sec": 0, 00:13:14.301 "w_mbytes_per_sec": 0 00:13:14.301 }, 00:13:14.301 "claimed": true, 00:13:14.301 "claim_type": "exclusive_write", 00:13:14.301 "zoned": false, 00:13:14.301 "supported_io_types": { 00:13:14.301 "read": true, 
00:13:14.301 "write": true, 00:13:14.301 "unmap": true, 00:13:14.301 "flush": true, 00:13:14.301 "reset": true, 00:13:14.301 "nvme_admin": false, 00:13:14.301 "nvme_io": false, 00:13:14.301 "nvme_io_md": false, 00:13:14.301 "write_zeroes": true, 00:13:14.301 "zcopy": true, 00:13:14.301 "get_zone_info": false, 00:13:14.301 "zone_management": false, 00:13:14.301 "zone_append": false, 00:13:14.301 "compare": false, 00:13:14.301 "compare_and_write": false, 00:13:14.301 "abort": true, 00:13:14.301 "seek_hole": false, 00:13:14.301 "seek_data": false, 00:13:14.301 "copy": true, 00:13:14.301 "nvme_iov_md": false 00:13:14.301 }, 00:13:14.301 "memory_domains": [ 00:13:14.301 { 00:13:14.301 "dma_device_id": "system", 00:13:14.301 "dma_device_type": 1 00:13:14.301 }, 00:13:14.301 { 00:13:14.301 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:14.301 "dma_device_type": 2 00:13:14.301 } 00:13:14.301 ], 00:13:14.301 "driver_specific": {} 00:13:14.301 } 00:13:14.301 ] 00:13:14.301 04:04:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:14.301 04:04:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:13:14.301 04:04:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:13:14.301 04:04:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:13:14.301 04:04:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:13:14.301 04:04:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:14.301 04:04:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:14.301 04:04:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:14.301 04:04:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 
00:13:14.301 04:04:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:14.301 04:04:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:14.301 04:04:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:14.301 04:04:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:14.301 04:04:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:14.301 04:04:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:14.301 04:04:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:14.301 04:04:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:14.301 04:04:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:14.301 04:04:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:14.301 04:04:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:14.301 "name": "Existed_Raid", 00:13:14.301 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:14.301 "strip_size_kb": 0, 00:13:14.301 "state": "configuring", 00:13:14.301 "raid_level": "raid1", 00:13:14.301 "superblock": false, 00:13:14.301 "num_base_bdevs": 4, 00:13:14.301 "num_base_bdevs_discovered": 2, 00:13:14.301 "num_base_bdevs_operational": 4, 00:13:14.301 "base_bdevs_list": [ 00:13:14.301 { 00:13:14.301 "name": "BaseBdev1", 00:13:14.301 "uuid": "4a3cd8f6-1860-4524-820c-628a85ed70d1", 00:13:14.301 "is_configured": true, 00:13:14.301 "data_offset": 0, 00:13:14.301 "data_size": 65536 00:13:14.301 }, 00:13:14.301 { 00:13:14.301 "name": "BaseBdev2", 00:13:14.301 "uuid": "0b8c2578-3a2a-4f8f-a05b-b3ec55bd91c5", 00:13:14.301 "is_configured": true, 
00:13:14.301 "data_offset": 0, 00:13:14.301 "data_size": 65536 00:13:14.301 }, 00:13:14.301 { 00:13:14.301 "name": "BaseBdev3", 00:13:14.301 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:14.301 "is_configured": false, 00:13:14.301 "data_offset": 0, 00:13:14.301 "data_size": 0 00:13:14.301 }, 00:13:14.301 { 00:13:14.301 "name": "BaseBdev4", 00:13:14.301 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:14.301 "is_configured": false, 00:13:14.301 "data_offset": 0, 00:13:14.301 "data_size": 0 00:13:14.301 } 00:13:14.301 ] 00:13:14.301 }' 00:13:14.301 04:04:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:14.301 04:04:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:14.563 04:04:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:13:14.563 04:04:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:14.563 04:04:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:14.821 [2024-12-06 04:04:07.950082] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:13:14.821 BaseBdev3 00:13:14.821 04:04:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:14.821 04:04:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:13:14.821 04:04:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:13:14.821 04:04:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:13:14.821 04:04:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:13:14.821 04:04:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:13:14.821 04:04:07 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@906 -- # bdev_timeout=2000 00:13:14.821 04:04:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:13:14.821 04:04:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:14.821 04:04:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:14.821 04:04:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:14.821 04:04:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:13:14.821 04:04:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:14.821 04:04:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:14.821 [ 00:13:14.821 { 00:13:14.821 "name": "BaseBdev3", 00:13:14.821 "aliases": [ 00:13:14.821 "e9bcfa22-01f3-4c85-b6bd-8fa1e3176d28" 00:13:14.821 ], 00:13:14.821 "product_name": "Malloc disk", 00:13:14.821 "block_size": 512, 00:13:14.821 "num_blocks": 65536, 00:13:14.821 "uuid": "e9bcfa22-01f3-4c85-b6bd-8fa1e3176d28", 00:13:14.821 "assigned_rate_limits": { 00:13:14.821 "rw_ios_per_sec": 0, 00:13:14.821 "rw_mbytes_per_sec": 0, 00:13:14.821 "r_mbytes_per_sec": 0, 00:13:14.821 "w_mbytes_per_sec": 0 00:13:14.821 }, 00:13:14.821 "claimed": true, 00:13:14.821 "claim_type": "exclusive_write", 00:13:14.821 "zoned": false, 00:13:14.821 "supported_io_types": { 00:13:14.821 "read": true, 00:13:14.821 "write": true, 00:13:14.821 "unmap": true, 00:13:14.821 "flush": true, 00:13:14.821 "reset": true, 00:13:14.821 "nvme_admin": false, 00:13:14.821 "nvme_io": false, 00:13:14.821 "nvme_io_md": false, 00:13:14.821 "write_zeroes": true, 00:13:14.821 "zcopy": true, 00:13:14.821 "get_zone_info": false, 00:13:14.821 "zone_management": false, 00:13:14.821 "zone_append": false, 00:13:14.821 "compare": false, 00:13:14.821 "compare_and_write": false, 
00:13:14.821 "abort": true, 00:13:14.821 "seek_hole": false, 00:13:14.821 "seek_data": false, 00:13:14.821 "copy": true, 00:13:14.821 "nvme_iov_md": false 00:13:14.821 }, 00:13:14.821 "memory_domains": [ 00:13:14.821 { 00:13:14.821 "dma_device_id": "system", 00:13:14.821 "dma_device_type": 1 00:13:14.821 }, 00:13:14.821 { 00:13:14.821 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:14.821 "dma_device_type": 2 00:13:14.821 } 00:13:14.821 ], 00:13:14.821 "driver_specific": {} 00:13:14.821 } 00:13:14.821 ] 00:13:14.821 04:04:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:14.821 04:04:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:13:14.821 04:04:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:13:14.821 04:04:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:13:14.821 04:04:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:13:14.821 04:04:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:14.821 04:04:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:14.821 04:04:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:14.821 04:04:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:14.821 04:04:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:14.821 04:04:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:14.821 04:04:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:14.821 04:04:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
00:13:14.821 04:04:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:14.821 04:04:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:14.821 04:04:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:14.821 04:04:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:14.821 04:04:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:14.821 04:04:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:14.821 04:04:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:14.821 "name": "Existed_Raid", 00:13:14.821 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:14.821 "strip_size_kb": 0, 00:13:14.821 "state": "configuring", 00:13:14.821 "raid_level": "raid1", 00:13:14.821 "superblock": false, 00:13:14.821 "num_base_bdevs": 4, 00:13:14.821 "num_base_bdevs_discovered": 3, 00:13:14.821 "num_base_bdevs_operational": 4, 00:13:14.821 "base_bdevs_list": [ 00:13:14.821 { 00:13:14.821 "name": "BaseBdev1", 00:13:14.821 "uuid": "4a3cd8f6-1860-4524-820c-628a85ed70d1", 00:13:14.821 "is_configured": true, 00:13:14.821 "data_offset": 0, 00:13:14.821 "data_size": 65536 00:13:14.821 }, 00:13:14.821 { 00:13:14.821 "name": "BaseBdev2", 00:13:14.821 "uuid": "0b8c2578-3a2a-4f8f-a05b-b3ec55bd91c5", 00:13:14.821 "is_configured": true, 00:13:14.821 "data_offset": 0, 00:13:14.821 "data_size": 65536 00:13:14.821 }, 00:13:14.821 { 00:13:14.821 "name": "BaseBdev3", 00:13:14.822 "uuid": "e9bcfa22-01f3-4c85-b6bd-8fa1e3176d28", 00:13:14.822 "is_configured": true, 00:13:14.822 "data_offset": 0, 00:13:14.822 "data_size": 65536 00:13:14.822 }, 00:13:14.822 { 00:13:14.822 "name": "BaseBdev4", 00:13:14.822 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:14.822 "is_configured": false, 
00:13:14.822 "data_offset": 0, 00:13:14.822 "data_size": 0 00:13:14.822 } 00:13:14.822 ] 00:13:14.822 }' 00:13:14.822 04:04:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:14.822 04:04:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:15.389 04:04:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:13:15.389 04:04:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:15.389 04:04:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:15.389 [2024-12-06 04:04:08.555533] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:13:15.389 [2024-12-06 04:04:08.555628] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:13:15.389 [2024-12-06 04:04:08.555639] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:13:15.389 [2024-12-06 04:04:08.556034] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:13:15.389 [2024-12-06 04:04:08.556301] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:13:15.389 [2024-12-06 04:04:08.556330] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:13:15.389 [2024-12-06 04:04:08.556750] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:15.389 BaseBdev4 00:13:15.389 04:04:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:15.389 04:04:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:13:15.389 04:04:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:13:15.389 04:04:08 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@904 -- # local bdev_timeout= 00:13:15.389 04:04:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:13:15.389 04:04:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:13:15.389 04:04:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:13:15.389 04:04:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:13:15.390 04:04:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:15.390 04:04:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:15.390 04:04:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:15.390 04:04:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:13:15.390 04:04:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:15.390 04:04:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:15.390 [ 00:13:15.390 { 00:13:15.390 "name": "BaseBdev4", 00:13:15.390 "aliases": [ 00:13:15.390 "6d1f1212-0625-4add-9058-a831b6696a3d" 00:13:15.390 ], 00:13:15.390 "product_name": "Malloc disk", 00:13:15.390 "block_size": 512, 00:13:15.390 "num_blocks": 65536, 00:13:15.390 "uuid": "6d1f1212-0625-4add-9058-a831b6696a3d", 00:13:15.390 "assigned_rate_limits": { 00:13:15.390 "rw_ios_per_sec": 0, 00:13:15.390 "rw_mbytes_per_sec": 0, 00:13:15.390 "r_mbytes_per_sec": 0, 00:13:15.390 "w_mbytes_per_sec": 0 00:13:15.390 }, 00:13:15.390 "claimed": true, 00:13:15.390 "claim_type": "exclusive_write", 00:13:15.390 "zoned": false, 00:13:15.390 "supported_io_types": { 00:13:15.390 "read": true, 00:13:15.390 "write": true, 00:13:15.390 "unmap": true, 00:13:15.390 "flush": true, 00:13:15.390 "reset": true, 00:13:15.390 
"nvme_admin": false, 00:13:15.390 "nvme_io": false, 00:13:15.390 "nvme_io_md": false, 00:13:15.390 "write_zeroes": true, 00:13:15.390 "zcopy": true, 00:13:15.390 "get_zone_info": false, 00:13:15.390 "zone_management": false, 00:13:15.390 "zone_append": false, 00:13:15.390 "compare": false, 00:13:15.390 "compare_and_write": false, 00:13:15.390 "abort": true, 00:13:15.390 "seek_hole": false, 00:13:15.390 "seek_data": false, 00:13:15.390 "copy": true, 00:13:15.390 "nvme_iov_md": false 00:13:15.390 }, 00:13:15.390 "memory_domains": [ 00:13:15.390 { 00:13:15.390 "dma_device_id": "system", 00:13:15.390 "dma_device_type": 1 00:13:15.390 }, 00:13:15.390 { 00:13:15.390 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:15.390 "dma_device_type": 2 00:13:15.390 } 00:13:15.390 ], 00:13:15.390 "driver_specific": {} 00:13:15.390 } 00:13:15.390 ] 00:13:15.390 04:04:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:15.390 04:04:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:13:15.390 04:04:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:13:15.390 04:04:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:13:15.390 04:04:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 4 00:13:15.390 04:04:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:15.390 04:04:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:15.390 04:04:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:15.390 04:04:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:15.390 04:04:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:15.390 04:04:08 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:15.390 04:04:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:15.390 04:04:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:15.390 04:04:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:15.390 04:04:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:15.390 04:04:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:15.390 04:04:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:15.390 04:04:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:15.390 04:04:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:15.390 04:04:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:15.390 "name": "Existed_Raid", 00:13:15.390 "uuid": "dba304c6-c2fc-47e0-943c-4b748ab2f654", 00:13:15.390 "strip_size_kb": 0, 00:13:15.390 "state": "online", 00:13:15.390 "raid_level": "raid1", 00:13:15.390 "superblock": false, 00:13:15.390 "num_base_bdevs": 4, 00:13:15.390 "num_base_bdevs_discovered": 4, 00:13:15.390 "num_base_bdevs_operational": 4, 00:13:15.390 "base_bdevs_list": [ 00:13:15.390 { 00:13:15.390 "name": "BaseBdev1", 00:13:15.390 "uuid": "4a3cd8f6-1860-4524-820c-628a85ed70d1", 00:13:15.390 "is_configured": true, 00:13:15.390 "data_offset": 0, 00:13:15.390 "data_size": 65536 00:13:15.390 }, 00:13:15.390 { 00:13:15.390 "name": "BaseBdev2", 00:13:15.390 "uuid": "0b8c2578-3a2a-4f8f-a05b-b3ec55bd91c5", 00:13:15.390 "is_configured": true, 00:13:15.390 "data_offset": 0, 00:13:15.390 "data_size": 65536 00:13:15.390 }, 00:13:15.390 { 00:13:15.390 "name": "BaseBdev3", 00:13:15.390 "uuid": 
"e9bcfa22-01f3-4c85-b6bd-8fa1e3176d28", 00:13:15.390 "is_configured": true, 00:13:15.390 "data_offset": 0, 00:13:15.390 "data_size": 65536 00:13:15.390 }, 00:13:15.390 { 00:13:15.390 "name": "BaseBdev4", 00:13:15.390 "uuid": "6d1f1212-0625-4add-9058-a831b6696a3d", 00:13:15.390 "is_configured": true, 00:13:15.390 "data_offset": 0, 00:13:15.390 "data_size": 65536 00:13:15.390 } 00:13:15.390 ] 00:13:15.390 }' 00:13:15.390 04:04:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:15.390 04:04:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:15.958 04:04:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:13:15.958 04:04:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:13:15.958 04:04:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:13:15.958 04:04:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:13:15.958 04:04:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:13:15.958 04:04:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:13:15.958 04:04:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:13:15.958 04:04:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:13:15.958 04:04:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:15.958 04:04:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:15.958 [2024-12-06 04:04:09.123523] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:15.958 04:04:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:15.958 04:04:09 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:13:15.958 "name": "Existed_Raid", 00:13:15.958 "aliases": [ 00:13:15.958 "dba304c6-c2fc-47e0-943c-4b748ab2f654" 00:13:15.958 ], 00:13:15.958 "product_name": "Raid Volume", 00:13:15.958 "block_size": 512, 00:13:15.958 "num_blocks": 65536, 00:13:15.958 "uuid": "dba304c6-c2fc-47e0-943c-4b748ab2f654", 00:13:15.958 "assigned_rate_limits": { 00:13:15.958 "rw_ios_per_sec": 0, 00:13:15.958 "rw_mbytes_per_sec": 0, 00:13:15.958 "r_mbytes_per_sec": 0, 00:13:15.958 "w_mbytes_per_sec": 0 00:13:15.958 }, 00:13:15.958 "claimed": false, 00:13:15.958 "zoned": false, 00:13:15.958 "supported_io_types": { 00:13:15.958 "read": true, 00:13:15.958 "write": true, 00:13:15.958 "unmap": false, 00:13:15.958 "flush": false, 00:13:15.958 "reset": true, 00:13:15.958 "nvme_admin": false, 00:13:15.958 "nvme_io": false, 00:13:15.958 "nvme_io_md": false, 00:13:15.958 "write_zeroes": true, 00:13:15.958 "zcopy": false, 00:13:15.958 "get_zone_info": false, 00:13:15.958 "zone_management": false, 00:13:15.959 "zone_append": false, 00:13:15.959 "compare": false, 00:13:15.959 "compare_and_write": false, 00:13:15.959 "abort": false, 00:13:15.959 "seek_hole": false, 00:13:15.959 "seek_data": false, 00:13:15.959 "copy": false, 00:13:15.959 "nvme_iov_md": false 00:13:15.959 }, 00:13:15.959 "memory_domains": [ 00:13:15.959 { 00:13:15.959 "dma_device_id": "system", 00:13:15.959 "dma_device_type": 1 00:13:15.959 }, 00:13:15.959 { 00:13:15.959 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:15.959 "dma_device_type": 2 00:13:15.959 }, 00:13:15.959 { 00:13:15.959 "dma_device_id": "system", 00:13:15.959 "dma_device_type": 1 00:13:15.959 }, 00:13:15.959 { 00:13:15.959 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:15.959 "dma_device_type": 2 00:13:15.959 }, 00:13:15.959 { 00:13:15.959 "dma_device_id": "system", 00:13:15.959 "dma_device_type": 1 00:13:15.959 }, 00:13:15.959 { 00:13:15.959 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 
00:13:15.959 "dma_device_type": 2 00:13:15.959 }, 00:13:15.959 { 00:13:15.959 "dma_device_id": "system", 00:13:15.959 "dma_device_type": 1 00:13:15.959 }, 00:13:15.959 { 00:13:15.959 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:15.959 "dma_device_type": 2 00:13:15.959 } 00:13:15.959 ], 00:13:15.959 "driver_specific": { 00:13:15.959 "raid": { 00:13:15.959 "uuid": "dba304c6-c2fc-47e0-943c-4b748ab2f654", 00:13:15.959 "strip_size_kb": 0, 00:13:15.959 "state": "online", 00:13:15.959 "raid_level": "raid1", 00:13:15.959 "superblock": false, 00:13:15.959 "num_base_bdevs": 4, 00:13:15.959 "num_base_bdevs_discovered": 4, 00:13:15.959 "num_base_bdevs_operational": 4, 00:13:15.959 "base_bdevs_list": [ 00:13:15.959 { 00:13:15.959 "name": "BaseBdev1", 00:13:15.959 "uuid": "4a3cd8f6-1860-4524-820c-628a85ed70d1", 00:13:15.959 "is_configured": true, 00:13:15.959 "data_offset": 0, 00:13:15.959 "data_size": 65536 00:13:15.959 }, 00:13:15.959 { 00:13:15.959 "name": "BaseBdev2", 00:13:15.959 "uuid": "0b8c2578-3a2a-4f8f-a05b-b3ec55bd91c5", 00:13:15.959 "is_configured": true, 00:13:15.959 "data_offset": 0, 00:13:15.959 "data_size": 65536 00:13:15.959 }, 00:13:15.959 { 00:13:15.959 "name": "BaseBdev3", 00:13:15.959 "uuid": "e9bcfa22-01f3-4c85-b6bd-8fa1e3176d28", 00:13:15.959 "is_configured": true, 00:13:15.959 "data_offset": 0, 00:13:15.959 "data_size": 65536 00:13:15.959 }, 00:13:15.959 { 00:13:15.959 "name": "BaseBdev4", 00:13:15.959 "uuid": "6d1f1212-0625-4add-9058-a831b6696a3d", 00:13:15.959 "is_configured": true, 00:13:15.959 "data_offset": 0, 00:13:15.959 "data_size": 65536 00:13:15.959 } 00:13:15.959 ] 00:13:15.959 } 00:13:15.959 } 00:13:15.959 }' 00:13:15.959 04:04:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:13:15.959 04:04:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:13:15.959 BaseBdev2 00:13:15.959 BaseBdev3 
00:13:15.959 BaseBdev4' 00:13:15.959 04:04:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:15.959 04:04:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:13:15.959 04:04:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:15.959 04:04:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:13:15.959 04:04:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:15.959 04:04:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:15.959 04:04:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:15.959 04:04:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:16.220 04:04:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:16.220 04:04:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:16.220 04:04:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:16.220 04:04:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:13:16.220 04:04:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:16.220 04:04:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:16.220 04:04:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:16.220 04:04:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:16.220 04:04:09 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:16.220 04:04:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:16.220 04:04:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:16.220 04:04:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:13:16.220 04:04:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:16.220 04:04:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:16.220 04:04:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:16.220 04:04:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:16.220 04:04:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:16.220 04:04:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:16.220 04:04:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:16.220 04:04:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:13:16.220 04:04:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:16.220 04:04:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:16.220 04:04:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:16.220 04:04:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:16.220 04:04:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:16.220 04:04:09 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:16.220 04:04:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:13:16.220 04:04:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:16.220 04:04:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:16.220 [2024-12-06 04:04:09.494565] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:13:16.480 04:04:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:16.480 04:04:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:13:16.480 04:04:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:13:16.480 04:04:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:13:16.480 04:04:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@199 -- # return 0 00:13:16.480 04:04:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:13:16.480 04:04:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:13:16.480 04:04:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:16.480 04:04:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:16.480 04:04:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:16.480 04:04:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:16.480 04:04:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:16.480 04:04:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:16.480 
04:04:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:16.480 04:04:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:16.480 04:04:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:16.480 04:04:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:16.480 04:04:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:16.480 04:04:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:16.480 04:04:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:16.480 04:04:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:16.480 04:04:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:16.480 "name": "Existed_Raid", 00:13:16.480 "uuid": "dba304c6-c2fc-47e0-943c-4b748ab2f654", 00:13:16.480 "strip_size_kb": 0, 00:13:16.480 "state": "online", 00:13:16.480 "raid_level": "raid1", 00:13:16.480 "superblock": false, 00:13:16.480 "num_base_bdevs": 4, 00:13:16.480 "num_base_bdevs_discovered": 3, 00:13:16.480 "num_base_bdevs_operational": 3, 00:13:16.480 "base_bdevs_list": [ 00:13:16.480 { 00:13:16.480 "name": null, 00:13:16.480 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:16.480 "is_configured": false, 00:13:16.480 "data_offset": 0, 00:13:16.480 "data_size": 65536 00:13:16.480 }, 00:13:16.480 { 00:13:16.480 "name": "BaseBdev2", 00:13:16.480 "uuid": "0b8c2578-3a2a-4f8f-a05b-b3ec55bd91c5", 00:13:16.480 "is_configured": true, 00:13:16.480 "data_offset": 0, 00:13:16.480 "data_size": 65536 00:13:16.480 }, 00:13:16.480 { 00:13:16.480 "name": "BaseBdev3", 00:13:16.480 "uuid": "e9bcfa22-01f3-4c85-b6bd-8fa1e3176d28", 00:13:16.480 "is_configured": true, 00:13:16.480 "data_offset": 0, 
00:13:16.480 "data_size": 65536 00:13:16.480 }, 00:13:16.480 { 00:13:16.480 "name": "BaseBdev4", 00:13:16.480 "uuid": "6d1f1212-0625-4add-9058-a831b6696a3d", 00:13:16.480 "is_configured": true, 00:13:16.480 "data_offset": 0, 00:13:16.480 "data_size": 65536 00:13:16.480 } 00:13:16.480 ] 00:13:16.480 }' 00:13:16.480 04:04:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:16.480 04:04:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:17.051 04:04:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:13:17.051 04:04:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:13:17.051 04:04:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:17.051 04:04:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:13:17.051 04:04:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:17.051 04:04:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:17.051 04:04:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:17.051 04:04:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:13:17.051 04:04:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:13:17.051 04:04:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:13:17.051 04:04:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:17.051 04:04:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:17.051 [2024-12-06 04:04:10.174225] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:13:17.051 04:04:10 bdev_raid.raid_state_function_test 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:17.051 04:04:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:13:17.051 04:04:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:13:17.051 04:04:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:17.051 04:04:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:17.051 04:04:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:13:17.051 04:04:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:17.051 04:04:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:17.051 04:04:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:13:17.051 04:04:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:13:17.051 04:04:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:13:17.051 04:04:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:17.051 04:04:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:17.051 [2024-12-06 04:04:10.360278] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:13:17.311 04:04:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:17.311 04:04:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:13:17.311 04:04:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:13:17.311 04:04:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:13:17.311 04:04:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # 
rpc_cmd bdev_raid_get_bdevs all 00:13:17.311 04:04:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:17.311 04:04:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:17.311 04:04:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:17.311 04:04:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:13:17.311 04:04:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:13:17.311 04:04:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:13:17.311 04:04:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:17.311 04:04:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:17.311 [2024-12-06 04:04:10.544207] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:13:17.311 [2024-12-06 04:04:10.544360] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:17.570 [2024-12-06 04:04:10.672551] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:17.570 [2024-12-06 04:04:10.672629] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:17.570 [2024-12-06 04:04:10.672645] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:13:17.570 04:04:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:17.570 04:04:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:13:17.570 04:04:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:13:17.570 04:04:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r 
'.[0]["name"] | select(.)' 00:13:17.570 04:04:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:17.570 04:04:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:17.570 04:04:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:17.570 04:04:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:17.570 04:04:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:13:17.571 04:04:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:13:17.571 04:04:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:13:17.571 04:04:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:13:17.571 04:04:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:13:17.571 04:04:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:13:17.571 04:04:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:17.571 04:04:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:17.571 BaseBdev2 00:13:17.571 04:04:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:17.571 04:04:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:13:17.571 04:04:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:13:17.571 04:04:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:13:17.571 04:04:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:13:17.571 04:04:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ 
-z '' ]] 00:13:17.571 04:04:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:13:17.571 04:04:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:13:17.571 04:04:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:17.571 04:04:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:17.571 04:04:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:17.571 04:04:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:13:17.571 04:04:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:17.571 04:04:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:17.571 [ 00:13:17.571 { 00:13:17.571 "name": "BaseBdev2", 00:13:17.571 "aliases": [ 00:13:17.571 "9f01fcfa-ca78-47cc-83b9-be892c4d0423" 00:13:17.571 ], 00:13:17.571 "product_name": "Malloc disk", 00:13:17.571 "block_size": 512, 00:13:17.571 "num_blocks": 65536, 00:13:17.571 "uuid": "9f01fcfa-ca78-47cc-83b9-be892c4d0423", 00:13:17.571 "assigned_rate_limits": { 00:13:17.571 "rw_ios_per_sec": 0, 00:13:17.571 "rw_mbytes_per_sec": 0, 00:13:17.571 "r_mbytes_per_sec": 0, 00:13:17.571 "w_mbytes_per_sec": 0 00:13:17.571 }, 00:13:17.571 "claimed": false, 00:13:17.571 "zoned": false, 00:13:17.571 "supported_io_types": { 00:13:17.571 "read": true, 00:13:17.571 "write": true, 00:13:17.571 "unmap": true, 00:13:17.571 "flush": true, 00:13:17.571 "reset": true, 00:13:17.571 "nvme_admin": false, 00:13:17.571 "nvme_io": false, 00:13:17.571 "nvme_io_md": false, 00:13:17.571 "write_zeroes": true, 00:13:17.571 "zcopy": true, 00:13:17.571 "get_zone_info": false, 00:13:17.571 "zone_management": false, 00:13:17.571 "zone_append": false, 00:13:17.571 "compare": false, 00:13:17.571 
"compare_and_write": false, 00:13:17.571 "abort": true, 00:13:17.571 "seek_hole": false, 00:13:17.571 "seek_data": false, 00:13:17.571 "copy": true, 00:13:17.571 "nvme_iov_md": false 00:13:17.571 }, 00:13:17.571 "memory_domains": [ 00:13:17.571 { 00:13:17.571 "dma_device_id": "system", 00:13:17.571 "dma_device_type": 1 00:13:17.571 }, 00:13:17.571 { 00:13:17.571 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:17.571 "dma_device_type": 2 00:13:17.571 } 00:13:17.571 ], 00:13:17.571 "driver_specific": {} 00:13:17.571 } 00:13:17.571 ] 00:13:17.571 04:04:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:17.571 04:04:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:13:17.571 04:04:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:13:17.571 04:04:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:13:17.571 04:04:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:13:17.571 04:04:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:17.571 04:04:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:17.571 BaseBdev3 00:13:17.571 04:04:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:17.571 04:04:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:13:17.571 04:04:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:13:17.571 04:04:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:13:17.571 04:04:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:13:17.571 04:04:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 
00:13:17.571 04:04:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:13:17.571 04:04:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:13:17.571 04:04:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:17.571 04:04:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:17.571 04:04:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:17.571 04:04:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:13:17.571 04:04:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:17.571 04:04:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:17.571 [ 00:13:17.571 { 00:13:17.571 "name": "BaseBdev3", 00:13:17.571 "aliases": [ 00:13:17.571 "19031c31-5eab-4aab-9596-0dec95afebd8" 00:13:17.571 ], 00:13:17.571 "product_name": "Malloc disk", 00:13:17.571 "block_size": 512, 00:13:17.571 "num_blocks": 65536, 00:13:17.571 "uuid": "19031c31-5eab-4aab-9596-0dec95afebd8", 00:13:17.571 "assigned_rate_limits": { 00:13:17.571 "rw_ios_per_sec": 0, 00:13:17.571 "rw_mbytes_per_sec": 0, 00:13:17.571 "r_mbytes_per_sec": 0, 00:13:17.571 "w_mbytes_per_sec": 0 00:13:17.571 }, 00:13:17.571 "claimed": false, 00:13:17.571 "zoned": false, 00:13:17.571 "supported_io_types": { 00:13:17.571 "read": true, 00:13:17.571 "write": true, 00:13:17.571 "unmap": true, 00:13:17.571 "flush": true, 00:13:17.571 "reset": true, 00:13:17.571 "nvme_admin": false, 00:13:17.571 "nvme_io": false, 00:13:17.571 "nvme_io_md": false, 00:13:17.571 "write_zeroes": true, 00:13:17.571 "zcopy": true, 00:13:17.571 "get_zone_info": false, 00:13:17.571 "zone_management": false, 00:13:17.571 "zone_append": false, 00:13:17.571 "compare": false, 00:13:17.571 
"compare_and_write": false, 00:13:17.571 "abort": true, 00:13:17.571 "seek_hole": false, 00:13:17.571 "seek_data": false, 00:13:17.571 "copy": true, 00:13:17.571 "nvme_iov_md": false 00:13:17.571 }, 00:13:17.571 "memory_domains": [ 00:13:17.571 { 00:13:17.571 "dma_device_id": "system", 00:13:17.571 "dma_device_type": 1 00:13:17.571 }, 00:13:17.571 { 00:13:17.571 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:17.571 "dma_device_type": 2 00:13:17.571 } 00:13:17.571 ], 00:13:17.571 "driver_specific": {} 00:13:17.571 } 00:13:17.571 ] 00:13:17.571 04:04:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:17.571 04:04:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:13:17.571 04:04:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:13:17.571 04:04:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:13:17.571 04:04:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:13:17.571 04:04:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:17.571 04:04:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:17.831 BaseBdev4 00:13:17.831 04:04:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:17.831 04:04:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:13:17.831 04:04:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:13:17.831 04:04:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:13:17.831 04:04:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:13:17.831 04:04:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 
00:13:17.831 04:04:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:13:17.831 04:04:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:13:17.831 04:04:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:17.831 04:04:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:17.831 04:04:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:17.831 04:04:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:13:17.831 04:04:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:17.831 04:04:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:17.831 [ 00:13:17.831 { 00:13:17.831 "name": "BaseBdev4", 00:13:17.831 "aliases": [ 00:13:17.831 "e8463672-50b4-42fb-a6a9-78b9ca724520" 00:13:17.831 ], 00:13:17.831 "product_name": "Malloc disk", 00:13:17.831 "block_size": 512, 00:13:17.831 "num_blocks": 65536, 00:13:17.831 "uuid": "e8463672-50b4-42fb-a6a9-78b9ca724520", 00:13:17.831 "assigned_rate_limits": { 00:13:17.831 "rw_ios_per_sec": 0, 00:13:17.831 "rw_mbytes_per_sec": 0, 00:13:17.831 "r_mbytes_per_sec": 0, 00:13:17.831 "w_mbytes_per_sec": 0 00:13:17.831 }, 00:13:17.831 "claimed": false, 00:13:17.831 "zoned": false, 00:13:17.831 "supported_io_types": { 00:13:17.831 "read": true, 00:13:17.831 "write": true, 00:13:17.831 "unmap": true, 00:13:17.831 "flush": true, 00:13:17.831 "reset": true, 00:13:17.831 "nvme_admin": false, 00:13:17.831 "nvme_io": false, 00:13:17.831 "nvme_io_md": false, 00:13:17.831 "write_zeroes": true, 00:13:17.831 "zcopy": true, 00:13:17.831 "get_zone_info": false, 00:13:17.831 "zone_management": false, 00:13:17.831 "zone_append": false, 00:13:17.831 "compare": false, 00:13:17.831 
"compare_and_write": false, 00:13:17.831 "abort": true, 00:13:17.831 "seek_hole": false, 00:13:17.831 "seek_data": false, 00:13:17.831 "copy": true, 00:13:17.831 "nvme_iov_md": false 00:13:17.831 }, 00:13:17.831 "memory_domains": [ 00:13:17.831 { 00:13:17.831 "dma_device_id": "system", 00:13:17.831 "dma_device_type": 1 00:13:17.831 }, 00:13:17.831 { 00:13:17.831 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:17.831 "dma_device_type": 2 00:13:17.831 } 00:13:17.831 ], 00:13:17.831 "driver_specific": {} 00:13:17.831 } 00:13:17.831 ] 00:13:17.831 04:04:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:17.831 04:04:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:13:17.831 04:04:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:13:17.831 04:04:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:13:17.831 04:04:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:13:17.831 04:04:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:17.831 04:04:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:17.831 [2024-12-06 04:04:11.016041] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:13:17.831 [2024-12-06 04:04:11.016226] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:13:17.831 [2024-12-06 04:04:11.016299] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:17.831 [2024-12-06 04:04:11.018992] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:13:17.831 [2024-12-06 04:04:11.019130] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 
00:13:17.831 04:04:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:17.832 04:04:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:13:17.832 04:04:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:17.832 04:04:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:17.832 04:04:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:17.832 04:04:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:17.832 04:04:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:17.832 04:04:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:17.832 04:04:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:17.832 04:04:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:17.832 04:04:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:17.832 04:04:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:17.832 04:04:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:17.832 04:04:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:17.832 04:04:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:17.832 04:04:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:17.832 04:04:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:17.832 "name": "Existed_Raid", 00:13:17.832 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:13:17.832 "strip_size_kb": 0, 00:13:17.832 "state": "configuring", 00:13:17.832 "raid_level": "raid1", 00:13:17.832 "superblock": false, 00:13:17.832 "num_base_bdevs": 4, 00:13:17.832 "num_base_bdevs_discovered": 3, 00:13:17.832 "num_base_bdevs_operational": 4, 00:13:17.832 "base_bdevs_list": [ 00:13:17.832 { 00:13:17.832 "name": "BaseBdev1", 00:13:17.832 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:17.832 "is_configured": false, 00:13:17.832 "data_offset": 0, 00:13:17.832 "data_size": 0 00:13:17.832 }, 00:13:17.832 { 00:13:17.832 "name": "BaseBdev2", 00:13:17.832 "uuid": "9f01fcfa-ca78-47cc-83b9-be892c4d0423", 00:13:17.832 "is_configured": true, 00:13:17.832 "data_offset": 0, 00:13:17.832 "data_size": 65536 00:13:17.832 }, 00:13:17.832 { 00:13:17.832 "name": "BaseBdev3", 00:13:17.832 "uuid": "19031c31-5eab-4aab-9596-0dec95afebd8", 00:13:17.832 "is_configured": true, 00:13:17.832 "data_offset": 0, 00:13:17.832 "data_size": 65536 00:13:17.832 }, 00:13:17.832 { 00:13:17.832 "name": "BaseBdev4", 00:13:17.832 "uuid": "e8463672-50b4-42fb-a6a9-78b9ca724520", 00:13:17.832 "is_configured": true, 00:13:17.832 "data_offset": 0, 00:13:17.832 "data_size": 65536 00:13:17.832 } 00:13:17.832 ] 00:13:17.832 }' 00:13:17.832 04:04:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:17.832 04:04:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:18.416 04:04:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:13:18.416 04:04:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:18.416 04:04:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:18.416 [2024-12-06 04:04:11.491304] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:13:18.416 04:04:11 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:18.416 04:04:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:13:18.416 04:04:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:18.416 04:04:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:18.416 04:04:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:18.416 04:04:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:18.416 04:04:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:18.416 04:04:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:18.416 04:04:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:18.416 04:04:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:18.416 04:04:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:18.416 04:04:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:18.416 04:04:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:18.416 04:04:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:18.416 04:04:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:18.416 04:04:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:18.416 04:04:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:18.416 "name": "Existed_Raid", 00:13:18.416 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:18.416 
"strip_size_kb": 0, 00:13:18.416 "state": "configuring", 00:13:18.416 "raid_level": "raid1", 00:13:18.416 "superblock": false, 00:13:18.416 "num_base_bdevs": 4, 00:13:18.416 "num_base_bdevs_discovered": 2, 00:13:18.416 "num_base_bdevs_operational": 4, 00:13:18.416 "base_bdevs_list": [ 00:13:18.416 { 00:13:18.416 "name": "BaseBdev1", 00:13:18.416 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:18.416 "is_configured": false, 00:13:18.416 "data_offset": 0, 00:13:18.416 "data_size": 0 00:13:18.416 }, 00:13:18.416 { 00:13:18.416 "name": null, 00:13:18.416 "uuid": "9f01fcfa-ca78-47cc-83b9-be892c4d0423", 00:13:18.416 "is_configured": false, 00:13:18.417 "data_offset": 0, 00:13:18.417 "data_size": 65536 00:13:18.417 }, 00:13:18.417 { 00:13:18.417 "name": "BaseBdev3", 00:13:18.417 "uuid": "19031c31-5eab-4aab-9596-0dec95afebd8", 00:13:18.417 "is_configured": true, 00:13:18.417 "data_offset": 0, 00:13:18.417 "data_size": 65536 00:13:18.417 }, 00:13:18.417 { 00:13:18.417 "name": "BaseBdev4", 00:13:18.417 "uuid": "e8463672-50b4-42fb-a6a9-78b9ca724520", 00:13:18.417 "is_configured": true, 00:13:18.417 "data_offset": 0, 00:13:18.417 "data_size": 65536 00:13:18.417 } 00:13:18.417 ] 00:13:18.417 }' 00:13:18.417 04:04:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:18.417 04:04:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:18.676 04:04:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:13:18.676 04:04:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:18.676 04:04:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:18.676 04:04:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:19.006 04:04:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:19.006 04:04:12 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:13:19.006 04:04:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:13:19.006 04:04:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:19.006 04:04:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:19.006 [2024-12-06 04:04:12.098541] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:19.006 BaseBdev1 00:13:19.006 04:04:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:19.006 04:04:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:13:19.006 04:04:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:13:19.006 04:04:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:13:19.006 04:04:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:13:19.006 04:04:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:13:19.006 04:04:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:13:19.006 04:04:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:13:19.006 04:04:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:19.006 04:04:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:19.006 04:04:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:19.006 04:04:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:13:19.006 04:04:12 bdev_raid.raid_state_function_test 
-- common/autotest_common.sh@563 -- # xtrace_disable 00:13:19.006 04:04:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:19.006 [ 00:13:19.006 { 00:13:19.006 "name": "BaseBdev1", 00:13:19.006 "aliases": [ 00:13:19.006 "7cacf60a-af4c-49e5-9670-cf2cca3adbee" 00:13:19.006 ], 00:13:19.006 "product_name": "Malloc disk", 00:13:19.006 "block_size": 512, 00:13:19.007 "num_blocks": 65536, 00:13:19.007 "uuid": "7cacf60a-af4c-49e5-9670-cf2cca3adbee", 00:13:19.007 "assigned_rate_limits": { 00:13:19.007 "rw_ios_per_sec": 0, 00:13:19.007 "rw_mbytes_per_sec": 0, 00:13:19.007 "r_mbytes_per_sec": 0, 00:13:19.007 "w_mbytes_per_sec": 0 00:13:19.007 }, 00:13:19.007 "claimed": true, 00:13:19.007 "claim_type": "exclusive_write", 00:13:19.007 "zoned": false, 00:13:19.007 "supported_io_types": { 00:13:19.007 "read": true, 00:13:19.007 "write": true, 00:13:19.007 "unmap": true, 00:13:19.007 "flush": true, 00:13:19.007 "reset": true, 00:13:19.007 "nvme_admin": false, 00:13:19.007 "nvme_io": false, 00:13:19.007 "nvme_io_md": false, 00:13:19.007 "write_zeroes": true, 00:13:19.007 "zcopy": true, 00:13:19.007 "get_zone_info": false, 00:13:19.007 "zone_management": false, 00:13:19.007 "zone_append": false, 00:13:19.007 "compare": false, 00:13:19.007 "compare_and_write": false, 00:13:19.007 "abort": true, 00:13:19.007 "seek_hole": false, 00:13:19.007 "seek_data": false, 00:13:19.007 "copy": true, 00:13:19.007 "nvme_iov_md": false 00:13:19.007 }, 00:13:19.007 "memory_domains": [ 00:13:19.007 { 00:13:19.007 "dma_device_id": "system", 00:13:19.007 "dma_device_type": 1 00:13:19.007 }, 00:13:19.007 { 00:13:19.007 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:19.007 "dma_device_type": 2 00:13:19.007 } 00:13:19.007 ], 00:13:19.007 "driver_specific": {} 00:13:19.007 } 00:13:19.007 ] 00:13:19.007 04:04:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:19.007 04:04:12 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@911 -- # return 0 00:13:19.007 04:04:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:13:19.007 04:04:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:19.007 04:04:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:19.007 04:04:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:19.007 04:04:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:19.007 04:04:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:19.007 04:04:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:19.007 04:04:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:19.007 04:04:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:19.007 04:04:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:19.007 04:04:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:19.007 04:04:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:19.007 04:04:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:19.007 04:04:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:19.007 04:04:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:19.007 04:04:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:19.007 "name": "Existed_Raid", 00:13:19.007 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:19.007 
"strip_size_kb": 0, 00:13:19.007 "state": "configuring", 00:13:19.007 "raid_level": "raid1", 00:13:19.007 "superblock": false, 00:13:19.007 "num_base_bdevs": 4, 00:13:19.007 "num_base_bdevs_discovered": 3, 00:13:19.007 "num_base_bdevs_operational": 4, 00:13:19.007 "base_bdevs_list": [ 00:13:19.007 { 00:13:19.007 "name": "BaseBdev1", 00:13:19.007 "uuid": "7cacf60a-af4c-49e5-9670-cf2cca3adbee", 00:13:19.007 "is_configured": true, 00:13:19.007 "data_offset": 0, 00:13:19.007 "data_size": 65536 00:13:19.007 }, 00:13:19.007 { 00:13:19.007 "name": null, 00:13:19.007 "uuid": "9f01fcfa-ca78-47cc-83b9-be892c4d0423", 00:13:19.007 "is_configured": false, 00:13:19.007 "data_offset": 0, 00:13:19.007 "data_size": 65536 00:13:19.007 }, 00:13:19.007 { 00:13:19.007 "name": "BaseBdev3", 00:13:19.007 "uuid": "19031c31-5eab-4aab-9596-0dec95afebd8", 00:13:19.007 "is_configured": true, 00:13:19.007 "data_offset": 0, 00:13:19.007 "data_size": 65536 00:13:19.007 }, 00:13:19.007 { 00:13:19.007 "name": "BaseBdev4", 00:13:19.007 "uuid": "e8463672-50b4-42fb-a6a9-78b9ca724520", 00:13:19.007 "is_configured": true, 00:13:19.007 "data_offset": 0, 00:13:19.007 "data_size": 65536 00:13:19.007 } 00:13:19.007 ] 00:13:19.007 }' 00:13:19.007 04:04:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:19.007 04:04:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:19.576 04:04:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:19.576 04:04:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:19.576 04:04:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:19.576 04:04:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:13:19.576 04:04:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:19.576 
04:04:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:13:19.576 04:04:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:13:19.576 04:04:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:19.576 04:04:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:19.576 [2024-12-06 04:04:12.690080] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:13:19.576 04:04:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:19.576 04:04:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:13:19.576 04:04:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:19.576 04:04:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:19.576 04:04:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:19.576 04:04:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:19.576 04:04:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:19.576 04:04:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:19.576 04:04:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:19.576 04:04:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:19.576 04:04:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:19.576 04:04:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:19.576 04:04:12 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:13:19.576 04:04:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:19.576 04:04:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:19.576 04:04:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:19.576 04:04:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:19.576 "name": "Existed_Raid", 00:13:19.576 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:19.576 "strip_size_kb": 0, 00:13:19.576 "state": "configuring", 00:13:19.576 "raid_level": "raid1", 00:13:19.576 "superblock": false, 00:13:19.576 "num_base_bdevs": 4, 00:13:19.576 "num_base_bdevs_discovered": 2, 00:13:19.576 "num_base_bdevs_operational": 4, 00:13:19.576 "base_bdevs_list": [ 00:13:19.576 { 00:13:19.576 "name": "BaseBdev1", 00:13:19.576 "uuid": "7cacf60a-af4c-49e5-9670-cf2cca3adbee", 00:13:19.576 "is_configured": true, 00:13:19.576 "data_offset": 0, 00:13:19.576 "data_size": 65536 00:13:19.576 }, 00:13:19.576 { 00:13:19.576 "name": null, 00:13:19.576 "uuid": "9f01fcfa-ca78-47cc-83b9-be892c4d0423", 00:13:19.576 "is_configured": false, 00:13:19.576 "data_offset": 0, 00:13:19.576 "data_size": 65536 00:13:19.576 }, 00:13:19.576 { 00:13:19.576 "name": null, 00:13:19.576 "uuid": "19031c31-5eab-4aab-9596-0dec95afebd8", 00:13:19.576 "is_configured": false, 00:13:19.576 "data_offset": 0, 00:13:19.576 "data_size": 65536 00:13:19.576 }, 00:13:19.576 { 00:13:19.576 "name": "BaseBdev4", 00:13:19.576 "uuid": "e8463672-50b4-42fb-a6a9-78b9ca724520", 00:13:19.576 "is_configured": true, 00:13:19.576 "data_offset": 0, 00:13:19.576 "data_size": 65536 00:13:19.576 } 00:13:19.576 ] 00:13:19.576 }' 00:13:19.576 04:04:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:19.576 04:04:12 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:13:20.145 04:04:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:20.145 04:04:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:13:20.145 04:04:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:20.145 04:04:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:20.145 04:04:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:20.145 04:04:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:13:20.145 04:04:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:13:20.145 04:04:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:20.145 04:04:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:20.145 [2024-12-06 04:04:13.269177] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:13:20.145 04:04:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:20.145 04:04:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:13:20.145 04:04:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:20.145 04:04:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:20.145 04:04:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:20.145 04:04:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:20.145 04:04:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # 
local num_base_bdevs_operational=4 00:13:20.145 04:04:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:20.145 04:04:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:20.145 04:04:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:20.145 04:04:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:20.145 04:04:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:20.145 04:04:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:20.145 04:04:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:20.145 04:04:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:20.145 04:04:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:20.145 04:04:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:20.145 "name": "Existed_Raid", 00:13:20.145 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:20.145 "strip_size_kb": 0, 00:13:20.145 "state": "configuring", 00:13:20.145 "raid_level": "raid1", 00:13:20.145 "superblock": false, 00:13:20.145 "num_base_bdevs": 4, 00:13:20.145 "num_base_bdevs_discovered": 3, 00:13:20.145 "num_base_bdevs_operational": 4, 00:13:20.145 "base_bdevs_list": [ 00:13:20.145 { 00:13:20.146 "name": "BaseBdev1", 00:13:20.146 "uuid": "7cacf60a-af4c-49e5-9670-cf2cca3adbee", 00:13:20.146 "is_configured": true, 00:13:20.146 "data_offset": 0, 00:13:20.146 "data_size": 65536 00:13:20.146 }, 00:13:20.146 { 00:13:20.146 "name": null, 00:13:20.146 "uuid": "9f01fcfa-ca78-47cc-83b9-be892c4d0423", 00:13:20.146 "is_configured": false, 00:13:20.146 "data_offset": 0, 00:13:20.146 "data_size": 65536 00:13:20.146 }, 00:13:20.146 { 
00:13:20.146 "name": "BaseBdev3", 00:13:20.146 "uuid": "19031c31-5eab-4aab-9596-0dec95afebd8", 00:13:20.146 "is_configured": true, 00:13:20.146 "data_offset": 0, 00:13:20.146 "data_size": 65536 00:13:20.146 }, 00:13:20.146 { 00:13:20.146 "name": "BaseBdev4", 00:13:20.146 "uuid": "e8463672-50b4-42fb-a6a9-78b9ca724520", 00:13:20.146 "is_configured": true, 00:13:20.146 "data_offset": 0, 00:13:20.146 "data_size": 65536 00:13:20.146 } 00:13:20.146 ] 00:13:20.146 }' 00:13:20.146 04:04:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:20.146 04:04:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:20.711 04:04:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:20.711 04:04:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:20.711 04:04:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:13:20.711 04:04:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:20.711 04:04:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:20.712 04:04:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:13:20.712 04:04:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:13:20.712 04:04:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:20.712 04:04:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:20.712 [2024-12-06 04:04:13.844301] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:13:20.712 04:04:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:20.712 04:04:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # 
verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:13:20.712 04:04:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:20.712 04:04:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:20.712 04:04:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:20.712 04:04:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:20.712 04:04:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:20.712 04:04:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:20.712 04:04:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:20.712 04:04:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:20.712 04:04:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:20.712 04:04:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:20.712 04:04:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:20.712 04:04:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:20.712 04:04:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:20.712 04:04:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:20.712 04:04:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:20.712 "name": "Existed_Raid", 00:13:20.712 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:20.712 "strip_size_kb": 0, 00:13:20.712 "state": "configuring", 00:13:20.712 "raid_level": "raid1", 00:13:20.712 "superblock": false, 00:13:20.712 
"num_base_bdevs": 4, 00:13:20.712 "num_base_bdevs_discovered": 2, 00:13:20.712 "num_base_bdevs_operational": 4, 00:13:20.712 "base_bdevs_list": [ 00:13:20.712 { 00:13:20.712 "name": null, 00:13:20.712 "uuid": "7cacf60a-af4c-49e5-9670-cf2cca3adbee", 00:13:20.712 "is_configured": false, 00:13:20.712 "data_offset": 0, 00:13:20.712 "data_size": 65536 00:13:20.712 }, 00:13:20.712 { 00:13:20.712 "name": null, 00:13:20.712 "uuid": "9f01fcfa-ca78-47cc-83b9-be892c4d0423", 00:13:20.712 "is_configured": false, 00:13:20.712 "data_offset": 0, 00:13:20.712 "data_size": 65536 00:13:20.712 }, 00:13:20.712 { 00:13:20.712 "name": "BaseBdev3", 00:13:20.712 "uuid": "19031c31-5eab-4aab-9596-0dec95afebd8", 00:13:20.712 "is_configured": true, 00:13:20.712 "data_offset": 0, 00:13:20.712 "data_size": 65536 00:13:20.712 }, 00:13:20.712 { 00:13:20.712 "name": "BaseBdev4", 00:13:20.712 "uuid": "e8463672-50b4-42fb-a6a9-78b9ca724520", 00:13:20.712 "is_configured": true, 00:13:20.712 "data_offset": 0, 00:13:20.712 "data_size": 65536 00:13:20.712 } 00:13:20.712 ] 00:13:20.712 }' 00:13:20.712 04:04:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:20.712 04:04:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:21.277 04:04:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:21.277 04:04:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:13:21.278 04:04:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:21.278 04:04:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:21.278 04:04:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:21.278 04:04:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:13:21.278 04:04:14 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:13:21.278 04:04:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:21.278 04:04:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:21.278 [2024-12-06 04:04:14.501332] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:21.278 04:04:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:21.278 04:04:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:13:21.278 04:04:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:21.278 04:04:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:21.278 04:04:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:21.278 04:04:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:21.278 04:04:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:21.278 04:04:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:21.278 04:04:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:21.278 04:04:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:21.278 04:04:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:21.278 04:04:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:21.278 04:04:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:21.278 04:04:14 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:21.278 04:04:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:21.278 04:04:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:21.278 04:04:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:21.278 "name": "Existed_Raid", 00:13:21.278 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:21.278 "strip_size_kb": 0, 00:13:21.278 "state": "configuring", 00:13:21.278 "raid_level": "raid1", 00:13:21.278 "superblock": false, 00:13:21.278 "num_base_bdevs": 4, 00:13:21.278 "num_base_bdevs_discovered": 3, 00:13:21.278 "num_base_bdevs_operational": 4, 00:13:21.278 "base_bdevs_list": [ 00:13:21.278 { 00:13:21.278 "name": null, 00:13:21.278 "uuid": "7cacf60a-af4c-49e5-9670-cf2cca3adbee", 00:13:21.278 "is_configured": false, 00:13:21.278 "data_offset": 0, 00:13:21.278 "data_size": 65536 00:13:21.278 }, 00:13:21.278 { 00:13:21.278 "name": "BaseBdev2", 00:13:21.278 "uuid": "9f01fcfa-ca78-47cc-83b9-be892c4d0423", 00:13:21.278 "is_configured": true, 00:13:21.278 "data_offset": 0, 00:13:21.278 "data_size": 65536 00:13:21.278 }, 00:13:21.278 { 00:13:21.278 "name": "BaseBdev3", 00:13:21.278 "uuid": "19031c31-5eab-4aab-9596-0dec95afebd8", 00:13:21.278 "is_configured": true, 00:13:21.278 "data_offset": 0, 00:13:21.278 "data_size": 65536 00:13:21.278 }, 00:13:21.278 { 00:13:21.278 "name": "BaseBdev4", 00:13:21.278 "uuid": "e8463672-50b4-42fb-a6a9-78b9ca724520", 00:13:21.278 "is_configured": true, 00:13:21.278 "data_offset": 0, 00:13:21.278 "data_size": 65536 00:13:21.278 } 00:13:21.278 ] 00:13:21.278 }' 00:13:21.278 04:04:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:21.278 04:04:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:21.845 04:04:14 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:21.845 04:04:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:21.845 04:04:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:13:21.845 04:04:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:21.845 04:04:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:21.845 04:04:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:13:21.845 04:04:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:21.845 04:04:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:13:21.845 04:04:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:21.845 04:04:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:21.845 04:04:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:21.845 04:04:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 7cacf60a-af4c-49e5-9670-cf2cca3adbee 00:13:21.845 04:04:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:21.845 04:04:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:21.845 [2024-12-06 04:04:15.138470] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:13:21.845 [2024-12-06 04:04:15.138628] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:13:21.845 [2024-12-06 04:04:15.138680] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:13:21.845 [2024-12-06 04:04:15.139052] bdev_raid.c: 
265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:13:21.845 [2024-12-06 04:04:15.139321] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:13:21.845 [2024-12-06 04:04:15.139369] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:13:21.845 [2024-12-06 04:04:15.139765] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:21.845 NewBaseBdev 00:13:21.845 04:04:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:21.845 04:04:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:13:21.845 04:04:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:13:21.845 04:04:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:13:21.845 04:04:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:13:21.845 04:04:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:13:21.845 04:04:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:13:21.845 04:04:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:13:21.845 04:04:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:21.845 04:04:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:21.845 04:04:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:21.845 04:04:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:13:21.845 04:04:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:21.845 04:04:15 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:21.845 [ 00:13:21.845 { 00:13:21.845 "name": "NewBaseBdev", 00:13:21.845 "aliases": [ 00:13:21.845 "7cacf60a-af4c-49e5-9670-cf2cca3adbee" 00:13:21.845 ], 00:13:21.845 "product_name": "Malloc disk", 00:13:21.845 "block_size": 512, 00:13:21.845 "num_blocks": 65536, 00:13:21.845 "uuid": "7cacf60a-af4c-49e5-9670-cf2cca3adbee", 00:13:21.845 "assigned_rate_limits": { 00:13:21.845 "rw_ios_per_sec": 0, 00:13:21.845 "rw_mbytes_per_sec": 0, 00:13:21.845 "r_mbytes_per_sec": 0, 00:13:21.845 "w_mbytes_per_sec": 0 00:13:21.845 }, 00:13:21.845 "claimed": true, 00:13:21.845 "claim_type": "exclusive_write", 00:13:21.845 "zoned": false, 00:13:21.845 "supported_io_types": { 00:13:21.845 "read": true, 00:13:21.845 "write": true, 00:13:21.845 "unmap": true, 00:13:21.845 "flush": true, 00:13:21.845 "reset": true, 00:13:21.845 "nvme_admin": false, 00:13:21.845 "nvme_io": false, 00:13:21.845 "nvme_io_md": false, 00:13:21.845 "write_zeroes": true, 00:13:21.845 "zcopy": true, 00:13:21.845 "get_zone_info": false, 00:13:21.845 "zone_management": false, 00:13:21.845 "zone_append": false, 00:13:21.845 "compare": false, 00:13:21.845 "compare_and_write": false, 00:13:21.845 "abort": true, 00:13:21.846 "seek_hole": false, 00:13:21.846 "seek_data": false, 00:13:21.846 "copy": true, 00:13:21.846 "nvme_iov_md": false 00:13:21.846 }, 00:13:21.846 "memory_domains": [ 00:13:21.846 { 00:13:21.846 "dma_device_id": "system", 00:13:21.846 "dma_device_type": 1 00:13:21.846 }, 00:13:21.846 { 00:13:21.846 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:21.846 "dma_device_type": 2 00:13:21.846 } 00:13:21.846 ], 00:13:21.846 "driver_specific": {} 00:13:21.846 } 00:13:21.846 ] 00:13:21.846 04:04:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:21.846 04:04:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:13:21.846 04:04:15 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid1 0 4 00:13:21.846 04:04:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:21.846 04:04:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:21.846 04:04:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:21.846 04:04:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:21.846 04:04:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:21.846 04:04:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:21.846 04:04:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:21.846 04:04:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:21.846 04:04:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:21.846 04:04:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:21.846 04:04:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:21.846 04:04:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:21.846 04:04:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:22.104 04:04:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:22.104 04:04:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:22.104 "name": "Existed_Raid", 00:13:22.104 "uuid": "ed230ba8-1f82-480d-8a3a-c4da07a46450", 00:13:22.104 "strip_size_kb": 0, 00:13:22.104 "state": "online", 00:13:22.104 "raid_level": "raid1", 
00:13:22.104 "superblock": false, 00:13:22.104 "num_base_bdevs": 4, 00:13:22.104 "num_base_bdevs_discovered": 4, 00:13:22.104 "num_base_bdevs_operational": 4, 00:13:22.104 "base_bdevs_list": [ 00:13:22.104 { 00:13:22.104 "name": "NewBaseBdev", 00:13:22.104 "uuid": "7cacf60a-af4c-49e5-9670-cf2cca3adbee", 00:13:22.104 "is_configured": true, 00:13:22.104 "data_offset": 0, 00:13:22.104 "data_size": 65536 00:13:22.104 }, 00:13:22.104 { 00:13:22.104 "name": "BaseBdev2", 00:13:22.104 "uuid": "9f01fcfa-ca78-47cc-83b9-be892c4d0423", 00:13:22.104 "is_configured": true, 00:13:22.104 "data_offset": 0, 00:13:22.104 "data_size": 65536 00:13:22.104 }, 00:13:22.104 { 00:13:22.104 "name": "BaseBdev3", 00:13:22.104 "uuid": "19031c31-5eab-4aab-9596-0dec95afebd8", 00:13:22.104 "is_configured": true, 00:13:22.104 "data_offset": 0, 00:13:22.104 "data_size": 65536 00:13:22.104 }, 00:13:22.104 { 00:13:22.104 "name": "BaseBdev4", 00:13:22.104 "uuid": "e8463672-50b4-42fb-a6a9-78b9ca724520", 00:13:22.104 "is_configured": true, 00:13:22.104 "data_offset": 0, 00:13:22.104 "data_size": 65536 00:13:22.104 } 00:13:22.104 ] 00:13:22.104 }' 00:13:22.104 04:04:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:22.104 04:04:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:22.362 04:04:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:13:22.362 04:04:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:13:22.362 04:04:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:13:22.362 04:04:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:13:22.362 04:04:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:13:22.362 04:04:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev 
cmp_base_bdev 00:13:22.362 04:04:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:13:22.362 04:04:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:13:22.362 04:04:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:22.362 04:04:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:22.362 [2024-12-06 04:04:15.654177] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:22.362 04:04:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:22.362 04:04:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:13:22.362 "name": "Existed_Raid", 00:13:22.362 "aliases": [ 00:13:22.362 "ed230ba8-1f82-480d-8a3a-c4da07a46450" 00:13:22.362 ], 00:13:22.362 "product_name": "Raid Volume", 00:13:22.362 "block_size": 512, 00:13:22.362 "num_blocks": 65536, 00:13:22.362 "uuid": "ed230ba8-1f82-480d-8a3a-c4da07a46450", 00:13:22.362 "assigned_rate_limits": { 00:13:22.363 "rw_ios_per_sec": 0, 00:13:22.363 "rw_mbytes_per_sec": 0, 00:13:22.363 "r_mbytes_per_sec": 0, 00:13:22.363 "w_mbytes_per_sec": 0 00:13:22.363 }, 00:13:22.363 "claimed": false, 00:13:22.363 "zoned": false, 00:13:22.363 "supported_io_types": { 00:13:22.363 "read": true, 00:13:22.363 "write": true, 00:13:22.363 "unmap": false, 00:13:22.363 "flush": false, 00:13:22.363 "reset": true, 00:13:22.363 "nvme_admin": false, 00:13:22.363 "nvme_io": false, 00:13:22.363 "nvme_io_md": false, 00:13:22.363 "write_zeroes": true, 00:13:22.363 "zcopy": false, 00:13:22.363 "get_zone_info": false, 00:13:22.363 "zone_management": false, 00:13:22.363 "zone_append": false, 00:13:22.363 "compare": false, 00:13:22.363 "compare_and_write": false, 00:13:22.363 "abort": false, 00:13:22.363 "seek_hole": false, 00:13:22.363 "seek_data": false, 00:13:22.363 "copy": false, 00:13:22.363 
"nvme_iov_md": false 00:13:22.363 }, 00:13:22.363 "memory_domains": [ 00:13:22.363 { 00:13:22.363 "dma_device_id": "system", 00:13:22.363 "dma_device_type": 1 00:13:22.363 }, 00:13:22.363 { 00:13:22.363 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:22.363 "dma_device_type": 2 00:13:22.363 }, 00:13:22.363 { 00:13:22.363 "dma_device_id": "system", 00:13:22.363 "dma_device_type": 1 00:13:22.363 }, 00:13:22.363 { 00:13:22.363 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:22.363 "dma_device_type": 2 00:13:22.363 }, 00:13:22.363 { 00:13:22.363 "dma_device_id": "system", 00:13:22.363 "dma_device_type": 1 00:13:22.363 }, 00:13:22.363 { 00:13:22.363 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:22.363 "dma_device_type": 2 00:13:22.363 }, 00:13:22.363 { 00:13:22.363 "dma_device_id": "system", 00:13:22.363 "dma_device_type": 1 00:13:22.363 }, 00:13:22.363 { 00:13:22.363 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:22.363 "dma_device_type": 2 00:13:22.363 } 00:13:22.363 ], 00:13:22.363 "driver_specific": { 00:13:22.363 "raid": { 00:13:22.363 "uuid": "ed230ba8-1f82-480d-8a3a-c4da07a46450", 00:13:22.363 "strip_size_kb": 0, 00:13:22.363 "state": "online", 00:13:22.363 "raid_level": "raid1", 00:13:22.363 "superblock": false, 00:13:22.363 "num_base_bdevs": 4, 00:13:22.363 "num_base_bdevs_discovered": 4, 00:13:22.363 "num_base_bdevs_operational": 4, 00:13:22.363 "base_bdevs_list": [ 00:13:22.363 { 00:13:22.363 "name": "NewBaseBdev", 00:13:22.363 "uuid": "7cacf60a-af4c-49e5-9670-cf2cca3adbee", 00:13:22.363 "is_configured": true, 00:13:22.363 "data_offset": 0, 00:13:22.363 "data_size": 65536 00:13:22.363 }, 00:13:22.363 { 00:13:22.363 "name": "BaseBdev2", 00:13:22.363 "uuid": "9f01fcfa-ca78-47cc-83b9-be892c4d0423", 00:13:22.363 "is_configured": true, 00:13:22.363 "data_offset": 0, 00:13:22.363 "data_size": 65536 00:13:22.363 }, 00:13:22.363 { 00:13:22.363 "name": "BaseBdev3", 00:13:22.363 "uuid": "19031c31-5eab-4aab-9596-0dec95afebd8", 00:13:22.363 "is_configured": true, 
00:13:22.363 "data_offset": 0, 00:13:22.363 "data_size": 65536 00:13:22.363 }, 00:13:22.363 { 00:13:22.363 "name": "BaseBdev4", 00:13:22.363 "uuid": "e8463672-50b4-42fb-a6a9-78b9ca724520", 00:13:22.363 "is_configured": true, 00:13:22.363 "data_offset": 0, 00:13:22.363 "data_size": 65536 00:13:22.363 } 00:13:22.363 ] 00:13:22.363 } 00:13:22.363 } 00:13:22.363 }' 00:13:22.363 04:04:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:13:22.620 04:04:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:13:22.620 BaseBdev2 00:13:22.620 BaseBdev3 00:13:22.620 BaseBdev4' 00:13:22.620 04:04:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:22.620 04:04:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:13:22.620 04:04:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:22.620 04:04:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:13:22.620 04:04:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:22.620 04:04:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:22.620 04:04:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:22.620 04:04:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:22.620 04:04:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:22.620 04:04:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:22.620 04:04:15 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:22.620 04:04:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:13:22.621 04:04:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:22.621 04:04:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:22.621 04:04:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:22.621 04:04:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:22.621 04:04:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:22.621 04:04:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:22.621 04:04:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:22.621 04:04:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:13:22.621 04:04:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:22.621 04:04:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:22.621 04:04:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:22.621 04:04:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:22.621 04:04:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:22.621 04:04:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:22.621 04:04:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:22.621 04:04:15 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:22.621 04:04:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:13:22.621 04:04:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:22.621 04:04:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:22.879 04:04:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:22.879 04:04:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:22.879 04:04:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:22.879 04:04:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:13:22.879 04:04:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:22.879 04:04:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:22.879 [2024-12-06 04:04:16.017151] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:13:22.879 [2024-12-06 04:04:16.017313] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:22.879 [2024-12-06 04:04:16.017511] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:22.879 [2024-12-06 04:04:16.018023] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:22.879 [2024-12-06 04:04:16.018170] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:13:22.879 04:04:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:22.879 04:04:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 73292 
00:13:22.879 04:04:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # '[' -z 73292 ']' 00:13:22.879 04:04:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # kill -0 73292 00:13:22.879 04:04:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # uname 00:13:22.879 04:04:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:22.879 04:04:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 73292 00:13:22.879 04:04:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:22.879 04:04:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:22.879 04:04:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 73292' 00:13:22.879 killing process with pid 73292 00:13:22.879 04:04:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@973 -- # kill 73292 00:13:22.879 [2024-12-06 04:04:16.072820] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:13:22.879 04:04:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@978 -- # wait 73292 00:13:23.470 [2024-12-06 04:04:16.604089] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:13:24.851 04:04:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:13:24.851 00:13:24.851 real 0m13.342s 00:13:24.851 user 0m20.627s 00:13:24.851 sys 0m2.658s 00:13:24.851 04:04:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:24.851 ************************************ 00:13:24.851 END TEST raid_state_function_test 00:13:24.851 ************************************ 00:13:24.851 04:04:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:24.852 04:04:18 bdev_raid -- 
bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test raid1 4 true 00:13:24.852 04:04:18 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:13:24.852 04:04:18 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:24.852 04:04:18 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:13:24.852 ************************************ 00:13:24.852 START TEST raid_state_function_test_sb 00:13:24.852 ************************************ 00:13:24.852 04:04:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test raid1 4 true 00:13:24.852 04:04:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:13:24.852 04:04:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:13:24.852 04:04:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:13:24.852 04:04:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:13:24.852 04:04:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:13:24.852 04:04:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:13:24.852 04:04:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:13:24.852 04:04:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:13:24.852 04:04:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:13:24.852 04:04:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:13:24.852 04:04:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:13:24.852 04:04:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:13:24.852 04:04:18 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:13:24.852 04:04:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:13:24.852 04:04:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:13:24.852 04:04:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:13:24.852 04:04:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:13:24.852 04:04:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:13:24.852 04:04:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:13:24.852 04:04:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:13:24.852 04:04:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:13:24.852 04:04:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:13:24.852 04:04:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:13:24.852 04:04:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:13:24.852 04:04:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:13:24.852 04:04:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:13:24.852 04:04:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:13:24.852 04:04:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:13:24.852 04:04:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=73981 00:13:24.852 04:04:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # 
/home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:13:24.852 04:04:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 73981' 00:13:24.852 Process raid pid: 73981 00:13:24.852 04:04:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 73981 00:13:24.852 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:24.852 04:04:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 73981 ']' 00:13:24.852 04:04:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:24.852 04:04:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:24.852 04:04:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:24.852 04:04:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:24.852 04:04:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:25.112 [2024-12-06 04:04:18.283592] Starting SPDK v25.01-pre git sha1 a4d2a837b / DPDK 24.03.0 initialization... 
00:13:25.112 [2024-12-06 04:04:18.283747] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:25.372 [2024-12-06 04:04:18.465711] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:25.372 [2024-12-06 04:04:18.633541] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:25.631 [2024-12-06 04:04:18.914416] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:25.631 [2024-12-06 04:04:18.914475] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:25.891 04:04:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:25.891 04:04:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:13:25.891 04:04:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:13:25.891 04:04:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:25.891 04:04:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:25.891 [2024-12-06 04:04:19.189940] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:13:25.891 [2024-12-06 04:04:19.190114] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:13:25.891 [2024-12-06 04:04:19.190171] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:13:25.891 [2024-12-06 04:04:19.190219] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:13:25.891 [2024-12-06 04:04:19.190262] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev 
with name: BaseBdev3 00:13:25.891 [2024-12-06 04:04:19.190288] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:13:25.891 [2024-12-06 04:04:19.190332] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:13:25.891 [2024-12-06 04:04:19.190356] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:13:25.891 04:04:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:25.891 04:04:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:13:25.891 04:04:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:25.891 04:04:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:25.891 04:04:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:25.891 04:04:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:25.891 04:04:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:25.891 04:04:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:25.891 04:04:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:25.891 04:04:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:25.891 04:04:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:25.891 04:04:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:25.891 04:04:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:25.891 04:04:19 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:25.891 04:04:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:25.891 04:04:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:26.150 04:04:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:26.150 "name": "Existed_Raid", 00:13:26.150 "uuid": "0fe9c8f8-b341-475b-922c-a27bb881a3d0", 00:13:26.150 "strip_size_kb": 0, 00:13:26.150 "state": "configuring", 00:13:26.150 "raid_level": "raid1", 00:13:26.150 "superblock": true, 00:13:26.150 "num_base_bdevs": 4, 00:13:26.150 "num_base_bdevs_discovered": 0, 00:13:26.150 "num_base_bdevs_operational": 4, 00:13:26.150 "base_bdevs_list": [ 00:13:26.150 { 00:13:26.150 "name": "BaseBdev1", 00:13:26.150 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:26.150 "is_configured": false, 00:13:26.150 "data_offset": 0, 00:13:26.150 "data_size": 0 00:13:26.150 }, 00:13:26.150 { 00:13:26.150 "name": "BaseBdev2", 00:13:26.150 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:26.150 "is_configured": false, 00:13:26.150 "data_offset": 0, 00:13:26.150 "data_size": 0 00:13:26.150 }, 00:13:26.150 { 00:13:26.150 "name": "BaseBdev3", 00:13:26.150 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:26.150 "is_configured": false, 00:13:26.150 "data_offset": 0, 00:13:26.150 "data_size": 0 00:13:26.150 }, 00:13:26.150 { 00:13:26.150 "name": "BaseBdev4", 00:13:26.150 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:26.150 "is_configured": false, 00:13:26.150 "data_offset": 0, 00:13:26.150 "data_size": 0 00:13:26.150 } 00:13:26.150 ] 00:13:26.150 }' 00:13:26.150 04:04:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:26.150 04:04:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:26.409 04:04:19 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:13:26.409 04:04:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:26.409 04:04:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:26.409 [2024-12-06 04:04:19.673095] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:13:26.409 [2024-12-06 04:04:19.673155] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:13:26.409 04:04:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:26.409 04:04:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:13:26.409 04:04:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:26.409 04:04:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:26.409 [2024-12-06 04:04:19.685095] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:13:26.410 [2024-12-06 04:04:19.685241] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:13:26.410 [2024-12-06 04:04:19.685284] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:13:26.410 [2024-12-06 04:04:19.685359] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:13:26.410 [2024-12-06 04:04:19.685392] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:13:26.410 [2024-12-06 04:04:19.685423] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:13:26.410 [2024-12-06 04:04:19.685489] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 
00:13:26.410 [2024-12-06 04:04:19.685521] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:13:26.410 04:04:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:26.410 04:04:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:13:26.410 04:04:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:26.410 04:04:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:26.410 [2024-12-06 04:04:19.748285] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:26.410 BaseBdev1 00:13:26.410 04:04:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:26.410 04:04:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:13:26.410 04:04:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:13:26.410 04:04:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:13:26.410 04:04:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:13:26.410 04:04:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:13:26.410 04:04:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:13:26.410 04:04:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:13:26.410 04:04:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:26.410 04:04:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:26.410 04:04:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:13:26.410 04:04:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:13:26.410 04:04:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:26.410 04:04:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:26.670 [ 00:13:26.670 { 00:13:26.670 "name": "BaseBdev1", 00:13:26.670 "aliases": [ 00:13:26.670 "5eb4bfa1-812c-4d9c-b769-dc622bfbf4de" 00:13:26.670 ], 00:13:26.670 "product_name": "Malloc disk", 00:13:26.670 "block_size": 512, 00:13:26.670 "num_blocks": 65536, 00:13:26.670 "uuid": "5eb4bfa1-812c-4d9c-b769-dc622bfbf4de", 00:13:26.670 "assigned_rate_limits": { 00:13:26.670 "rw_ios_per_sec": 0, 00:13:26.670 "rw_mbytes_per_sec": 0, 00:13:26.670 "r_mbytes_per_sec": 0, 00:13:26.670 "w_mbytes_per_sec": 0 00:13:26.670 }, 00:13:26.670 "claimed": true, 00:13:26.670 "claim_type": "exclusive_write", 00:13:26.670 "zoned": false, 00:13:26.670 "supported_io_types": { 00:13:26.670 "read": true, 00:13:26.670 "write": true, 00:13:26.670 "unmap": true, 00:13:26.670 "flush": true, 00:13:26.670 "reset": true, 00:13:26.670 "nvme_admin": false, 00:13:26.670 "nvme_io": false, 00:13:26.670 "nvme_io_md": false, 00:13:26.670 "write_zeroes": true, 00:13:26.670 "zcopy": true, 00:13:26.670 "get_zone_info": false, 00:13:26.670 "zone_management": false, 00:13:26.670 "zone_append": false, 00:13:26.670 "compare": false, 00:13:26.670 "compare_and_write": false, 00:13:26.670 "abort": true, 00:13:26.670 "seek_hole": false, 00:13:26.670 "seek_data": false, 00:13:26.670 "copy": true, 00:13:26.670 "nvme_iov_md": false 00:13:26.670 }, 00:13:26.670 "memory_domains": [ 00:13:26.670 { 00:13:26.670 "dma_device_id": "system", 00:13:26.670 "dma_device_type": 1 00:13:26.670 }, 00:13:26.670 { 00:13:26.670 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:26.670 "dma_device_type": 2 00:13:26.670 } 00:13:26.670 ], 00:13:26.670 "driver_specific": {} 
00:13:26.670 } 00:13:26.670 ] 00:13:26.670 04:04:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:26.670 04:04:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:13:26.670 04:04:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:13:26.670 04:04:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:26.670 04:04:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:26.670 04:04:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:26.670 04:04:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:26.670 04:04:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:26.670 04:04:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:26.670 04:04:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:26.670 04:04:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:26.670 04:04:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:26.670 04:04:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:26.670 04:04:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:26.671 04:04:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:26.671 04:04:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:26.671 04:04:19 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:26.671 04:04:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:26.671 "name": "Existed_Raid", 00:13:26.671 "uuid": "bfe978df-3574-4bd7-988b-e784d864f251", 00:13:26.671 "strip_size_kb": 0, 00:13:26.671 "state": "configuring", 00:13:26.671 "raid_level": "raid1", 00:13:26.671 "superblock": true, 00:13:26.671 "num_base_bdevs": 4, 00:13:26.671 "num_base_bdevs_discovered": 1, 00:13:26.671 "num_base_bdevs_operational": 4, 00:13:26.671 "base_bdevs_list": [ 00:13:26.671 { 00:13:26.671 "name": "BaseBdev1", 00:13:26.671 "uuid": "5eb4bfa1-812c-4d9c-b769-dc622bfbf4de", 00:13:26.671 "is_configured": true, 00:13:26.671 "data_offset": 2048, 00:13:26.671 "data_size": 63488 00:13:26.671 }, 00:13:26.671 { 00:13:26.671 "name": "BaseBdev2", 00:13:26.671 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:26.671 "is_configured": false, 00:13:26.671 "data_offset": 0, 00:13:26.671 "data_size": 0 00:13:26.671 }, 00:13:26.671 { 00:13:26.671 "name": "BaseBdev3", 00:13:26.671 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:26.671 "is_configured": false, 00:13:26.671 "data_offset": 0, 00:13:26.671 "data_size": 0 00:13:26.671 }, 00:13:26.671 { 00:13:26.671 "name": "BaseBdev4", 00:13:26.671 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:26.671 "is_configured": false, 00:13:26.671 "data_offset": 0, 00:13:26.671 "data_size": 0 00:13:26.671 } 00:13:26.671 ] 00:13:26.671 }' 00:13:26.671 04:04:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:26.671 04:04:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:26.931 04:04:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:13:26.931 04:04:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:26.931 04:04:20 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:13:26.931 [2024-12-06 04:04:20.231694] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:13:26.931 [2024-12-06 04:04:20.231902] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:13:26.931 04:04:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:26.931 04:04:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:13:26.931 04:04:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:26.931 04:04:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:26.931 [2024-12-06 04:04:20.243639] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:26.931 [2024-12-06 04:04:20.246144] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:13:26.931 [2024-12-06 04:04:20.246240] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:13:26.931 [2024-12-06 04:04:20.246275] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:13:26.931 [2024-12-06 04:04:20.246306] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:13:26.931 [2024-12-06 04:04:20.246329] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:13:26.931 [2024-12-06 04:04:20.246354] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:13:26.931 04:04:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:26.931 04:04:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:13:26.931 04:04:20 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:13:26.931 04:04:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:13:26.931 04:04:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:26.931 04:04:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:26.931 04:04:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:26.931 04:04:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:26.931 04:04:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:26.931 04:04:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:26.931 04:04:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:26.931 04:04:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:26.931 04:04:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:26.931 04:04:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:26.931 04:04:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:26.931 04:04:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:26.931 04:04:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:26.932 04:04:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:27.192 04:04:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:27.192 "name": 
"Existed_Raid", 00:13:27.192 "uuid": "05f746c7-2eb1-47de-8aec-58b6234c9d26", 00:13:27.192 "strip_size_kb": 0, 00:13:27.192 "state": "configuring", 00:13:27.192 "raid_level": "raid1", 00:13:27.192 "superblock": true, 00:13:27.192 "num_base_bdevs": 4, 00:13:27.192 "num_base_bdevs_discovered": 1, 00:13:27.192 "num_base_bdevs_operational": 4, 00:13:27.192 "base_bdevs_list": [ 00:13:27.192 { 00:13:27.192 "name": "BaseBdev1", 00:13:27.192 "uuid": "5eb4bfa1-812c-4d9c-b769-dc622bfbf4de", 00:13:27.192 "is_configured": true, 00:13:27.192 "data_offset": 2048, 00:13:27.192 "data_size": 63488 00:13:27.192 }, 00:13:27.192 { 00:13:27.192 "name": "BaseBdev2", 00:13:27.192 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:27.192 "is_configured": false, 00:13:27.192 "data_offset": 0, 00:13:27.192 "data_size": 0 00:13:27.192 }, 00:13:27.192 { 00:13:27.192 "name": "BaseBdev3", 00:13:27.192 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:27.192 "is_configured": false, 00:13:27.192 "data_offset": 0, 00:13:27.192 "data_size": 0 00:13:27.192 }, 00:13:27.192 { 00:13:27.192 "name": "BaseBdev4", 00:13:27.192 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:27.192 "is_configured": false, 00:13:27.192 "data_offset": 0, 00:13:27.192 "data_size": 0 00:13:27.192 } 00:13:27.192 ] 00:13:27.192 }' 00:13:27.192 04:04:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:27.192 04:04:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:27.452 04:04:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:13:27.452 04:04:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:27.452 04:04:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:27.452 [2024-12-06 04:04:20.736303] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:27.452 
BaseBdev2 00:13:27.452 04:04:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:27.452 04:04:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:13:27.452 04:04:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:13:27.452 04:04:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:13:27.452 04:04:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:13:27.452 04:04:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:13:27.452 04:04:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:13:27.452 04:04:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:13:27.452 04:04:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:27.452 04:04:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:27.452 04:04:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:27.452 04:04:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:13:27.452 04:04:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:27.452 04:04:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:27.452 [ 00:13:27.452 { 00:13:27.452 "name": "BaseBdev2", 00:13:27.452 "aliases": [ 00:13:27.452 "8edf4ec6-f28f-4891-93a9-c559be15c57d" 00:13:27.452 ], 00:13:27.452 "product_name": "Malloc disk", 00:13:27.452 "block_size": 512, 00:13:27.452 "num_blocks": 65536, 00:13:27.452 "uuid": "8edf4ec6-f28f-4891-93a9-c559be15c57d", 00:13:27.452 "assigned_rate_limits": { 
00:13:27.452 "rw_ios_per_sec": 0, 00:13:27.452 "rw_mbytes_per_sec": 0, 00:13:27.452 "r_mbytes_per_sec": 0, 00:13:27.452 "w_mbytes_per_sec": 0 00:13:27.452 }, 00:13:27.452 "claimed": true, 00:13:27.452 "claim_type": "exclusive_write", 00:13:27.452 "zoned": false, 00:13:27.452 "supported_io_types": { 00:13:27.452 "read": true, 00:13:27.452 "write": true, 00:13:27.452 "unmap": true, 00:13:27.452 "flush": true, 00:13:27.452 "reset": true, 00:13:27.452 "nvme_admin": false, 00:13:27.452 "nvme_io": false, 00:13:27.452 "nvme_io_md": false, 00:13:27.452 "write_zeroes": true, 00:13:27.452 "zcopy": true, 00:13:27.452 "get_zone_info": false, 00:13:27.452 "zone_management": false, 00:13:27.452 "zone_append": false, 00:13:27.452 "compare": false, 00:13:27.452 "compare_and_write": false, 00:13:27.452 "abort": true, 00:13:27.452 "seek_hole": false, 00:13:27.452 "seek_data": false, 00:13:27.452 "copy": true, 00:13:27.452 "nvme_iov_md": false 00:13:27.452 }, 00:13:27.452 "memory_domains": [ 00:13:27.452 { 00:13:27.452 "dma_device_id": "system", 00:13:27.452 "dma_device_type": 1 00:13:27.452 }, 00:13:27.452 { 00:13:27.452 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:27.452 "dma_device_type": 2 00:13:27.452 } 00:13:27.452 ], 00:13:27.452 "driver_specific": {} 00:13:27.452 } 00:13:27.452 ] 00:13:27.452 04:04:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:27.452 04:04:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:13:27.452 04:04:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:13:27.452 04:04:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:13:27.452 04:04:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:13:27.452 04:04:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 
00:13:27.452 04:04:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:27.452 04:04:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:27.452 04:04:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:27.452 04:04:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:27.452 04:04:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:27.452 04:04:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:27.452 04:04:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:27.452 04:04:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:27.452 04:04:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:27.452 04:04:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:27.452 04:04:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:27.452 04:04:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:27.452 04:04:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:27.712 04:04:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:27.712 "name": "Existed_Raid", 00:13:27.712 "uuid": "05f746c7-2eb1-47de-8aec-58b6234c9d26", 00:13:27.712 "strip_size_kb": 0, 00:13:27.712 "state": "configuring", 00:13:27.712 "raid_level": "raid1", 00:13:27.712 "superblock": true, 00:13:27.712 "num_base_bdevs": 4, 00:13:27.712 "num_base_bdevs_discovered": 2, 00:13:27.712 "num_base_bdevs_operational": 4, 00:13:27.712 
"base_bdevs_list": [ 00:13:27.712 { 00:13:27.712 "name": "BaseBdev1", 00:13:27.712 "uuid": "5eb4bfa1-812c-4d9c-b769-dc622bfbf4de", 00:13:27.712 "is_configured": true, 00:13:27.712 "data_offset": 2048, 00:13:27.712 "data_size": 63488 00:13:27.712 }, 00:13:27.712 { 00:13:27.712 "name": "BaseBdev2", 00:13:27.712 "uuid": "8edf4ec6-f28f-4891-93a9-c559be15c57d", 00:13:27.712 "is_configured": true, 00:13:27.712 "data_offset": 2048, 00:13:27.712 "data_size": 63488 00:13:27.712 }, 00:13:27.712 { 00:13:27.712 "name": "BaseBdev3", 00:13:27.712 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:27.712 "is_configured": false, 00:13:27.712 "data_offset": 0, 00:13:27.712 "data_size": 0 00:13:27.712 }, 00:13:27.712 { 00:13:27.712 "name": "BaseBdev4", 00:13:27.712 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:27.712 "is_configured": false, 00:13:27.712 "data_offset": 0, 00:13:27.712 "data_size": 0 00:13:27.712 } 00:13:27.712 ] 00:13:27.712 }' 00:13:27.712 04:04:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:27.712 04:04:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:27.973 04:04:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:13:27.973 04:04:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:27.973 04:04:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:27.973 [2024-12-06 04:04:21.281970] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:13:27.973 BaseBdev3 00:13:27.973 04:04:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:27.973 04:04:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:13:27.973 04:04:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local 
bdev_name=BaseBdev3 00:13:27.973 04:04:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:13:27.973 04:04:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:13:27.973 04:04:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:13:27.973 04:04:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:13:27.973 04:04:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:13:27.973 04:04:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:27.973 04:04:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:27.973 04:04:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:27.973 04:04:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:13:27.973 04:04:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:27.973 04:04:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:27.973 [ 00:13:27.973 { 00:13:27.973 "name": "BaseBdev3", 00:13:27.973 "aliases": [ 00:13:27.973 "a1085749-3cce-4e83-899d-07c91e281701" 00:13:27.973 ], 00:13:27.973 "product_name": "Malloc disk", 00:13:27.973 "block_size": 512, 00:13:27.973 "num_blocks": 65536, 00:13:27.973 "uuid": "a1085749-3cce-4e83-899d-07c91e281701", 00:13:27.973 "assigned_rate_limits": { 00:13:27.973 "rw_ios_per_sec": 0, 00:13:27.973 "rw_mbytes_per_sec": 0, 00:13:27.973 "r_mbytes_per_sec": 0, 00:13:27.973 "w_mbytes_per_sec": 0 00:13:27.973 }, 00:13:27.973 "claimed": true, 00:13:27.973 "claim_type": "exclusive_write", 00:13:27.973 "zoned": false, 00:13:27.973 "supported_io_types": { 00:13:27.973 "read": true, 00:13:27.973 
"write": true, 00:13:27.973 "unmap": true, 00:13:27.973 "flush": true, 00:13:27.973 "reset": true, 00:13:27.973 "nvme_admin": false, 00:13:27.973 "nvme_io": false, 00:13:27.973 "nvme_io_md": false, 00:13:27.973 "write_zeroes": true, 00:13:27.973 "zcopy": true, 00:13:27.973 "get_zone_info": false, 00:13:27.973 "zone_management": false, 00:13:27.973 "zone_append": false, 00:13:27.973 "compare": false, 00:13:27.973 "compare_and_write": false, 00:13:27.973 "abort": true, 00:13:27.973 "seek_hole": false, 00:13:27.973 "seek_data": false, 00:13:27.973 "copy": true, 00:13:27.973 "nvme_iov_md": false 00:13:27.973 }, 00:13:27.973 "memory_domains": [ 00:13:27.973 { 00:13:27.973 "dma_device_id": "system", 00:13:27.973 "dma_device_type": 1 00:13:27.973 }, 00:13:27.973 { 00:13:27.973 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:27.973 "dma_device_type": 2 00:13:27.973 } 00:13:27.973 ], 00:13:27.973 "driver_specific": {} 00:13:27.973 } 00:13:27.973 ] 00:13:27.973 04:04:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:27.973 04:04:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:13:27.973 04:04:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:13:27.973 04:04:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:13:27.973 04:04:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:13:27.973 04:04:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:27.973 04:04:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:27.973 04:04:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:27.973 04:04:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local 
strip_size=0 00:13:27.973 04:04:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:27.973 04:04:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:27.973 04:04:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:27.973 04:04:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:27.973 04:04:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:28.234 04:04:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:28.234 04:04:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:28.234 04:04:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:28.234 04:04:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:28.234 04:04:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:28.234 04:04:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:28.234 "name": "Existed_Raid", 00:13:28.234 "uuid": "05f746c7-2eb1-47de-8aec-58b6234c9d26", 00:13:28.234 "strip_size_kb": 0, 00:13:28.234 "state": "configuring", 00:13:28.234 "raid_level": "raid1", 00:13:28.234 "superblock": true, 00:13:28.234 "num_base_bdevs": 4, 00:13:28.234 "num_base_bdevs_discovered": 3, 00:13:28.234 "num_base_bdevs_operational": 4, 00:13:28.234 "base_bdevs_list": [ 00:13:28.234 { 00:13:28.234 "name": "BaseBdev1", 00:13:28.234 "uuid": "5eb4bfa1-812c-4d9c-b769-dc622bfbf4de", 00:13:28.234 "is_configured": true, 00:13:28.234 "data_offset": 2048, 00:13:28.234 "data_size": 63488 00:13:28.234 }, 00:13:28.234 { 00:13:28.234 "name": "BaseBdev2", 00:13:28.234 "uuid": 
"8edf4ec6-f28f-4891-93a9-c559be15c57d", 00:13:28.234 "is_configured": true, 00:13:28.234 "data_offset": 2048, 00:13:28.234 "data_size": 63488 00:13:28.234 }, 00:13:28.234 { 00:13:28.234 "name": "BaseBdev3", 00:13:28.234 "uuid": "a1085749-3cce-4e83-899d-07c91e281701", 00:13:28.234 "is_configured": true, 00:13:28.234 "data_offset": 2048, 00:13:28.234 "data_size": 63488 00:13:28.234 }, 00:13:28.234 { 00:13:28.234 "name": "BaseBdev4", 00:13:28.234 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:28.234 "is_configured": false, 00:13:28.234 "data_offset": 0, 00:13:28.234 "data_size": 0 00:13:28.234 } 00:13:28.234 ] 00:13:28.234 }' 00:13:28.234 04:04:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:28.234 04:04:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:28.565 04:04:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:13:28.565 04:04:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:28.565 04:04:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:28.565 [2024-12-06 04:04:21.810695] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:13:28.565 BaseBdev4 00:13:28.565 [2024-12-06 04:04:21.811168] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:13:28.565 [2024-12-06 04:04:21.811189] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:13:28.565 [2024-12-06 04:04:21.811506] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:13:28.565 [2024-12-06 04:04:21.811681] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:13:28.565 [2024-12-06 04:04:21.811695] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, 
raid_bdev 0x617000007e80 00:13:28.565 [2024-12-06 04:04:21.811870] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:28.565 04:04:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:28.565 04:04:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:13:28.565 04:04:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:13:28.565 04:04:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:13:28.565 04:04:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:13:28.565 04:04:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:13:28.565 04:04:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:13:28.565 04:04:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:13:28.565 04:04:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:28.565 04:04:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:28.565 04:04:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:28.565 04:04:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:13:28.565 04:04:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:28.565 04:04:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:28.565 [ 00:13:28.565 { 00:13:28.565 "name": "BaseBdev4", 00:13:28.565 "aliases": [ 00:13:28.565 "30213a61-4105-4eab-b84c-123e15ad030d" 00:13:28.565 ], 00:13:28.565 "product_name": "Malloc disk", 00:13:28.565 "block_size": 512, 00:13:28.565 
"num_blocks": 65536, 00:13:28.565 "uuid": "30213a61-4105-4eab-b84c-123e15ad030d", 00:13:28.566 "assigned_rate_limits": { 00:13:28.566 "rw_ios_per_sec": 0, 00:13:28.566 "rw_mbytes_per_sec": 0, 00:13:28.566 "r_mbytes_per_sec": 0, 00:13:28.566 "w_mbytes_per_sec": 0 00:13:28.566 }, 00:13:28.566 "claimed": true, 00:13:28.566 "claim_type": "exclusive_write", 00:13:28.566 "zoned": false, 00:13:28.566 "supported_io_types": { 00:13:28.566 "read": true, 00:13:28.566 "write": true, 00:13:28.566 "unmap": true, 00:13:28.566 "flush": true, 00:13:28.566 "reset": true, 00:13:28.566 "nvme_admin": false, 00:13:28.566 "nvme_io": false, 00:13:28.566 "nvme_io_md": false, 00:13:28.566 "write_zeroes": true, 00:13:28.566 "zcopy": true, 00:13:28.566 "get_zone_info": false, 00:13:28.566 "zone_management": false, 00:13:28.566 "zone_append": false, 00:13:28.566 "compare": false, 00:13:28.566 "compare_and_write": false, 00:13:28.566 "abort": true, 00:13:28.566 "seek_hole": false, 00:13:28.566 "seek_data": false, 00:13:28.566 "copy": true, 00:13:28.566 "nvme_iov_md": false 00:13:28.566 }, 00:13:28.566 "memory_domains": [ 00:13:28.566 { 00:13:28.566 "dma_device_id": "system", 00:13:28.566 "dma_device_type": 1 00:13:28.566 }, 00:13:28.566 { 00:13:28.566 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:28.566 "dma_device_type": 2 00:13:28.566 } 00:13:28.566 ], 00:13:28.566 "driver_specific": {} 00:13:28.566 } 00:13:28.566 ] 00:13:28.566 04:04:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:28.566 04:04:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:13:28.566 04:04:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:13:28.566 04:04:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:13:28.566 04:04:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 4 
00:13:28.566 04:04:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:28.566 04:04:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:28.566 04:04:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:28.566 04:04:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:28.566 04:04:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:28.566 04:04:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:28.566 04:04:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:28.566 04:04:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:28.566 04:04:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:28.566 04:04:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:28.566 04:04:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:28.566 04:04:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:28.566 04:04:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:28.566 04:04:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:28.566 04:04:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:28.566 "name": "Existed_Raid", 00:13:28.566 "uuid": "05f746c7-2eb1-47de-8aec-58b6234c9d26", 00:13:28.566 "strip_size_kb": 0, 00:13:28.566 "state": "online", 00:13:28.566 "raid_level": "raid1", 00:13:28.566 "superblock": true, 00:13:28.566 "num_base_bdevs": 4, 
00:13:28.566 "num_base_bdevs_discovered": 4, 00:13:28.566 "num_base_bdevs_operational": 4, 00:13:28.566 "base_bdevs_list": [ 00:13:28.566 { 00:13:28.566 "name": "BaseBdev1", 00:13:28.566 "uuid": "5eb4bfa1-812c-4d9c-b769-dc622bfbf4de", 00:13:28.566 "is_configured": true, 00:13:28.566 "data_offset": 2048, 00:13:28.566 "data_size": 63488 00:13:28.566 }, 00:13:28.566 { 00:13:28.566 "name": "BaseBdev2", 00:13:28.566 "uuid": "8edf4ec6-f28f-4891-93a9-c559be15c57d", 00:13:28.566 "is_configured": true, 00:13:28.566 "data_offset": 2048, 00:13:28.566 "data_size": 63488 00:13:28.566 }, 00:13:28.566 { 00:13:28.566 "name": "BaseBdev3", 00:13:28.566 "uuid": "a1085749-3cce-4e83-899d-07c91e281701", 00:13:28.566 "is_configured": true, 00:13:28.566 "data_offset": 2048, 00:13:28.566 "data_size": 63488 00:13:28.566 }, 00:13:28.566 { 00:13:28.566 "name": "BaseBdev4", 00:13:28.566 "uuid": "30213a61-4105-4eab-b84c-123e15ad030d", 00:13:28.566 "is_configured": true, 00:13:28.566 "data_offset": 2048, 00:13:28.566 "data_size": 63488 00:13:28.566 } 00:13:28.566 ] 00:13:28.566 }' 00:13:28.566 04:04:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:28.566 04:04:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:29.134 04:04:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:13:29.134 04:04:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:13:29.134 04:04:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:13:29.134 04:04:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:13:29.134 04:04:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:13:29.134 04:04:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:13:29.134 
04:04:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:13:29.134 04:04:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:29.134 04:04:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:13:29.134 04:04:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:29.134 [2024-12-06 04:04:22.250390] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:29.134 04:04:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:29.134 04:04:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:13:29.134 "name": "Existed_Raid", 00:13:29.134 "aliases": [ 00:13:29.134 "05f746c7-2eb1-47de-8aec-58b6234c9d26" 00:13:29.134 ], 00:13:29.134 "product_name": "Raid Volume", 00:13:29.134 "block_size": 512, 00:13:29.134 "num_blocks": 63488, 00:13:29.134 "uuid": "05f746c7-2eb1-47de-8aec-58b6234c9d26", 00:13:29.134 "assigned_rate_limits": { 00:13:29.134 "rw_ios_per_sec": 0, 00:13:29.134 "rw_mbytes_per_sec": 0, 00:13:29.134 "r_mbytes_per_sec": 0, 00:13:29.134 "w_mbytes_per_sec": 0 00:13:29.134 }, 00:13:29.134 "claimed": false, 00:13:29.134 "zoned": false, 00:13:29.134 "supported_io_types": { 00:13:29.134 "read": true, 00:13:29.134 "write": true, 00:13:29.134 "unmap": false, 00:13:29.134 "flush": false, 00:13:29.134 "reset": true, 00:13:29.134 "nvme_admin": false, 00:13:29.134 "nvme_io": false, 00:13:29.134 "nvme_io_md": false, 00:13:29.134 "write_zeroes": true, 00:13:29.134 "zcopy": false, 00:13:29.134 "get_zone_info": false, 00:13:29.134 "zone_management": false, 00:13:29.134 "zone_append": false, 00:13:29.134 "compare": false, 00:13:29.134 "compare_and_write": false, 00:13:29.134 "abort": false, 00:13:29.134 "seek_hole": false, 00:13:29.134 "seek_data": false, 00:13:29.134 "copy": false, 00:13:29.134 
"nvme_iov_md": false 00:13:29.134 }, 00:13:29.134 "memory_domains": [ 00:13:29.134 { 00:13:29.134 "dma_device_id": "system", 00:13:29.134 "dma_device_type": 1 00:13:29.134 }, 00:13:29.134 { 00:13:29.134 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:29.134 "dma_device_type": 2 00:13:29.134 }, 00:13:29.134 { 00:13:29.134 "dma_device_id": "system", 00:13:29.134 "dma_device_type": 1 00:13:29.134 }, 00:13:29.134 { 00:13:29.134 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:29.134 "dma_device_type": 2 00:13:29.134 }, 00:13:29.134 { 00:13:29.134 "dma_device_id": "system", 00:13:29.134 "dma_device_type": 1 00:13:29.134 }, 00:13:29.134 { 00:13:29.134 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:29.134 "dma_device_type": 2 00:13:29.134 }, 00:13:29.134 { 00:13:29.134 "dma_device_id": "system", 00:13:29.134 "dma_device_type": 1 00:13:29.134 }, 00:13:29.134 { 00:13:29.134 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:29.134 "dma_device_type": 2 00:13:29.134 } 00:13:29.134 ], 00:13:29.134 "driver_specific": { 00:13:29.134 "raid": { 00:13:29.134 "uuid": "05f746c7-2eb1-47de-8aec-58b6234c9d26", 00:13:29.134 "strip_size_kb": 0, 00:13:29.134 "state": "online", 00:13:29.134 "raid_level": "raid1", 00:13:29.134 "superblock": true, 00:13:29.134 "num_base_bdevs": 4, 00:13:29.134 "num_base_bdevs_discovered": 4, 00:13:29.134 "num_base_bdevs_operational": 4, 00:13:29.134 "base_bdevs_list": [ 00:13:29.134 { 00:13:29.134 "name": "BaseBdev1", 00:13:29.134 "uuid": "5eb4bfa1-812c-4d9c-b769-dc622bfbf4de", 00:13:29.134 "is_configured": true, 00:13:29.134 "data_offset": 2048, 00:13:29.134 "data_size": 63488 00:13:29.134 }, 00:13:29.134 { 00:13:29.134 "name": "BaseBdev2", 00:13:29.134 "uuid": "8edf4ec6-f28f-4891-93a9-c559be15c57d", 00:13:29.134 "is_configured": true, 00:13:29.134 "data_offset": 2048, 00:13:29.134 "data_size": 63488 00:13:29.134 }, 00:13:29.134 { 00:13:29.134 "name": "BaseBdev3", 00:13:29.134 "uuid": "a1085749-3cce-4e83-899d-07c91e281701", 00:13:29.134 "is_configured": true, 
00:13:29.134 "data_offset": 2048, 00:13:29.134 "data_size": 63488 00:13:29.134 }, 00:13:29.134 { 00:13:29.134 "name": "BaseBdev4", 00:13:29.134 "uuid": "30213a61-4105-4eab-b84c-123e15ad030d", 00:13:29.134 "is_configured": true, 00:13:29.134 "data_offset": 2048, 00:13:29.134 "data_size": 63488 00:13:29.134 } 00:13:29.134 ] 00:13:29.134 } 00:13:29.134 } 00:13:29.134 }' 00:13:29.134 04:04:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:13:29.134 04:04:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:13:29.134 BaseBdev2 00:13:29.134 BaseBdev3 00:13:29.134 BaseBdev4' 00:13:29.134 04:04:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:29.134 04:04:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:13:29.134 04:04:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:29.134 04:04:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:13:29.134 04:04:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:29.134 04:04:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:29.134 04:04:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:29.134 04:04:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:29.134 04:04:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:29.134 04:04:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:29.134 04:04:22 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:29.134 04:04:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:29.134 04:04:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:13:29.134 04:04:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:29.134 04:04:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:29.134 04:04:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:29.396 04:04:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:29.396 04:04:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:29.396 04:04:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:29.396 04:04:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:13:29.396 04:04:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:29.396 04:04:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:29.396 04:04:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:29.396 04:04:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:29.396 04:04:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:29.396 04:04:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:29.396 04:04:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # 
for name in $base_bdev_names 00:13:29.396 04:04:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:13:29.396 04:04:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:29.396 04:04:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:29.396 04:04:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:29.396 04:04:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:29.396 04:04:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:29.396 04:04:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:29.396 04:04:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:13:29.396 04:04:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:29.396 04:04:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:29.396 [2024-12-06 04:04:22.593520] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:13:29.396 04:04:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:29.396 04:04:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:13:29.396 04:04:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:13:29.396 04:04:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:13:29.396 04:04:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@199 -- # return 0 00:13:29.396 04:04:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:13:29.396 04:04:22 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:13:29.396 04:04:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:29.396 04:04:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:29.396 04:04:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:29.396 04:04:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:29.396 04:04:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:29.396 04:04:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:29.396 04:04:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:29.396 04:04:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:29.396 04:04:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:29.396 04:04:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:29.396 04:04:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:29.396 04:04:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:29.396 04:04:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:29.396 04:04:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:29.396 04:04:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:29.396 "name": "Existed_Raid", 00:13:29.396 "uuid": "05f746c7-2eb1-47de-8aec-58b6234c9d26", 00:13:29.396 "strip_size_kb": 0, 00:13:29.396 
"state": "online", 00:13:29.396 "raid_level": "raid1", 00:13:29.396 "superblock": true, 00:13:29.396 "num_base_bdevs": 4, 00:13:29.396 "num_base_bdevs_discovered": 3, 00:13:29.396 "num_base_bdevs_operational": 3, 00:13:29.396 "base_bdevs_list": [ 00:13:29.396 { 00:13:29.396 "name": null, 00:13:29.396 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:29.396 "is_configured": false, 00:13:29.396 "data_offset": 0, 00:13:29.396 "data_size": 63488 00:13:29.396 }, 00:13:29.396 { 00:13:29.396 "name": "BaseBdev2", 00:13:29.396 "uuid": "8edf4ec6-f28f-4891-93a9-c559be15c57d", 00:13:29.396 "is_configured": true, 00:13:29.396 "data_offset": 2048, 00:13:29.396 "data_size": 63488 00:13:29.396 }, 00:13:29.396 { 00:13:29.396 "name": "BaseBdev3", 00:13:29.396 "uuid": "a1085749-3cce-4e83-899d-07c91e281701", 00:13:29.396 "is_configured": true, 00:13:29.396 "data_offset": 2048, 00:13:29.396 "data_size": 63488 00:13:29.396 }, 00:13:29.396 { 00:13:29.396 "name": "BaseBdev4", 00:13:29.396 "uuid": "30213a61-4105-4eab-b84c-123e15ad030d", 00:13:29.396 "is_configured": true, 00:13:29.396 "data_offset": 2048, 00:13:29.396 "data_size": 63488 00:13:29.396 } 00:13:29.396 ] 00:13:29.396 }' 00:13:29.396 04:04:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:29.396 04:04:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:29.965 04:04:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:13:29.965 04:04:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:13:29.965 04:04:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:29.965 04:04:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:13:29.965 04:04:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:29.965 04:04:23 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:29.965 04:04:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:29.965 04:04:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:13:29.965 04:04:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:13:29.966 04:04:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:13:29.966 04:04:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:29.966 04:04:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:29.966 [2024-12-06 04:04:23.197461] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:13:29.966 04:04:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:29.966 04:04:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:13:29.966 04:04:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:13:29.966 04:04:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:29.966 04:04:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:13:29.966 04:04:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:29.966 04:04:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:29.966 04:04:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:30.225 04:04:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:13:30.225 04:04:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' 
Existed_Raid ']' 00:13:30.225 04:04:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:13:30.225 04:04:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:30.225 04:04:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:30.225 [2024-12-06 04:04:23.342058] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:13:30.225 04:04:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:30.225 04:04:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:13:30.225 04:04:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:13:30.225 04:04:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:30.225 04:04:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:30.225 04:04:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:30.225 04:04:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:13:30.225 04:04:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:30.225 04:04:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:13:30.225 04:04:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:13:30.225 04:04:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:13:30.225 04:04:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:30.225 04:04:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:30.225 [2024-12-06 04:04:23.496929] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:13:30.225 [2024-12-06 04:04:23.497080] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:30.484 [2024-12-06 04:04:23.593207] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:30.484 [2024-12-06 04:04:23.593341] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:30.484 [2024-12-06 04:04:23.593358] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:13:30.484 04:04:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:30.484 04:04:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:13:30.484 04:04:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:13:30.484 04:04:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:30.484 04:04:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:13:30.484 04:04:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:30.484 04:04:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:30.484 04:04:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:30.484 04:04:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:13:30.485 04:04:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:13:30.485 04:04:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:13:30.485 04:04:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:13:30.485 04:04:23 bdev_raid.raid_state_function_test_sb 
-- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:13:30.485 04:04:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:13:30.485 04:04:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:30.485 04:04:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:30.485 BaseBdev2 00:13:30.485 04:04:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:30.485 04:04:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:13:30.485 04:04:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:13:30.485 04:04:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:13:30.485 04:04:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:13:30.485 04:04:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:13:30.485 04:04:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:13:30.485 04:04:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:13:30.485 04:04:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:30.485 04:04:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:30.485 04:04:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:30.485 04:04:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:13:30.485 04:04:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:30.485 04:04:23 bdev_raid.raid_state_function_test_sb 
-- common/autotest_common.sh@10 -- # set +x 00:13:30.485 [ 00:13:30.485 { 00:13:30.485 "name": "BaseBdev2", 00:13:30.485 "aliases": [ 00:13:30.485 "4199df29-7b95-4f7b-a434-4157d11c89e0" 00:13:30.485 ], 00:13:30.485 "product_name": "Malloc disk", 00:13:30.485 "block_size": 512, 00:13:30.485 "num_blocks": 65536, 00:13:30.485 "uuid": "4199df29-7b95-4f7b-a434-4157d11c89e0", 00:13:30.485 "assigned_rate_limits": { 00:13:30.485 "rw_ios_per_sec": 0, 00:13:30.485 "rw_mbytes_per_sec": 0, 00:13:30.485 "r_mbytes_per_sec": 0, 00:13:30.485 "w_mbytes_per_sec": 0 00:13:30.485 }, 00:13:30.485 "claimed": false, 00:13:30.485 "zoned": false, 00:13:30.485 "supported_io_types": { 00:13:30.485 "read": true, 00:13:30.485 "write": true, 00:13:30.485 "unmap": true, 00:13:30.485 "flush": true, 00:13:30.485 "reset": true, 00:13:30.485 "nvme_admin": false, 00:13:30.485 "nvme_io": false, 00:13:30.485 "nvme_io_md": false, 00:13:30.485 "write_zeroes": true, 00:13:30.485 "zcopy": true, 00:13:30.485 "get_zone_info": false, 00:13:30.485 "zone_management": false, 00:13:30.485 "zone_append": false, 00:13:30.485 "compare": false, 00:13:30.485 "compare_and_write": false, 00:13:30.485 "abort": true, 00:13:30.485 "seek_hole": false, 00:13:30.485 "seek_data": false, 00:13:30.485 "copy": true, 00:13:30.485 "nvme_iov_md": false 00:13:30.485 }, 00:13:30.485 "memory_domains": [ 00:13:30.485 { 00:13:30.485 "dma_device_id": "system", 00:13:30.485 "dma_device_type": 1 00:13:30.485 }, 00:13:30.485 { 00:13:30.485 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:30.485 "dma_device_type": 2 00:13:30.485 } 00:13:30.485 ], 00:13:30.485 "driver_specific": {} 00:13:30.485 } 00:13:30.485 ] 00:13:30.485 04:04:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:30.485 04:04:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:13:30.485 04:04:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:13:30.485 04:04:23 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:13:30.485 04:04:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:13:30.485 04:04:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:30.485 04:04:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:30.485 BaseBdev3 00:13:30.485 04:04:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:30.485 04:04:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:13:30.485 04:04:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:13:30.485 04:04:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:13:30.485 04:04:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:13:30.485 04:04:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:13:30.485 04:04:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:13:30.485 04:04:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:13:30.485 04:04:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:30.485 04:04:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:30.485 04:04:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:30.485 04:04:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:13:30.485 04:04:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:30.485 04:04:23 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:30.485 [ 00:13:30.485 { 00:13:30.485 "name": "BaseBdev3", 00:13:30.485 "aliases": [ 00:13:30.485 "0e9f60d1-95b4-49ff-a535-dccbdd00be41" 00:13:30.485 ], 00:13:30.485 "product_name": "Malloc disk", 00:13:30.485 "block_size": 512, 00:13:30.485 "num_blocks": 65536, 00:13:30.485 "uuid": "0e9f60d1-95b4-49ff-a535-dccbdd00be41", 00:13:30.485 "assigned_rate_limits": { 00:13:30.485 "rw_ios_per_sec": 0, 00:13:30.485 "rw_mbytes_per_sec": 0, 00:13:30.485 "r_mbytes_per_sec": 0, 00:13:30.485 "w_mbytes_per_sec": 0 00:13:30.485 }, 00:13:30.485 "claimed": false, 00:13:30.485 "zoned": false, 00:13:30.485 "supported_io_types": { 00:13:30.485 "read": true, 00:13:30.485 "write": true, 00:13:30.485 "unmap": true, 00:13:30.485 "flush": true, 00:13:30.485 "reset": true, 00:13:30.485 "nvme_admin": false, 00:13:30.485 "nvme_io": false, 00:13:30.485 "nvme_io_md": false, 00:13:30.485 "write_zeroes": true, 00:13:30.485 "zcopy": true, 00:13:30.485 "get_zone_info": false, 00:13:30.485 "zone_management": false, 00:13:30.485 "zone_append": false, 00:13:30.485 "compare": false, 00:13:30.485 "compare_and_write": false, 00:13:30.485 "abort": true, 00:13:30.485 "seek_hole": false, 00:13:30.485 "seek_data": false, 00:13:30.485 "copy": true, 00:13:30.485 "nvme_iov_md": false 00:13:30.485 }, 00:13:30.485 "memory_domains": [ 00:13:30.485 { 00:13:30.485 "dma_device_id": "system", 00:13:30.485 "dma_device_type": 1 00:13:30.485 }, 00:13:30.485 { 00:13:30.485 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:30.485 "dma_device_type": 2 00:13:30.485 } 00:13:30.485 ], 00:13:30.485 "driver_specific": {} 00:13:30.485 } 00:13:30.485 ] 00:13:30.485 04:04:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:30.485 04:04:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:13:30.485 04:04:23 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@286 -- # (( i++ )) 00:13:30.485 04:04:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:13:30.485 04:04:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:13:30.485 04:04:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:30.485 04:04:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:30.745 BaseBdev4 00:13:30.745 04:04:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:30.745 04:04:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:13:30.745 04:04:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:13:30.745 04:04:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:13:30.745 04:04:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:13:30.745 04:04:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:13:30.745 04:04:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:13:30.745 04:04:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:13:30.745 04:04:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:30.745 04:04:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:30.745 04:04:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:30.745 04:04:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:13:30.745 04:04:23 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:13:30.745 04:04:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:30.745 [ 00:13:30.745 { 00:13:30.745 "name": "BaseBdev4", 00:13:30.745 "aliases": [ 00:13:30.745 "709e669d-d126-4434-95c8-6ae1a06f58e3" 00:13:30.745 ], 00:13:30.745 "product_name": "Malloc disk", 00:13:30.745 "block_size": 512, 00:13:30.745 "num_blocks": 65536, 00:13:30.745 "uuid": "709e669d-d126-4434-95c8-6ae1a06f58e3", 00:13:30.745 "assigned_rate_limits": { 00:13:30.745 "rw_ios_per_sec": 0, 00:13:30.745 "rw_mbytes_per_sec": 0, 00:13:30.745 "r_mbytes_per_sec": 0, 00:13:30.745 "w_mbytes_per_sec": 0 00:13:30.745 }, 00:13:30.745 "claimed": false, 00:13:30.745 "zoned": false, 00:13:30.745 "supported_io_types": { 00:13:30.745 "read": true, 00:13:30.745 "write": true, 00:13:30.745 "unmap": true, 00:13:30.745 "flush": true, 00:13:30.745 "reset": true, 00:13:30.745 "nvme_admin": false, 00:13:30.745 "nvme_io": false, 00:13:30.745 "nvme_io_md": false, 00:13:30.745 "write_zeroes": true, 00:13:30.745 "zcopy": true, 00:13:30.745 "get_zone_info": false, 00:13:30.745 "zone_management": false, 00:13:30.745 "zone_append": false, 00:13:30.745 "compare": false, 00:13:30.745 "compare_and_write": false, 00:13:30.745 "abort": true, 00:13:30.745 "seek_hole": false, 00:13:30.745 "seek_data": false, 00:13:30.745 "copy": true, 00:13:30.745 "nvme_iov_md": false 00:13:30.745 }, 00:13:30.745 "memory_domains": [ 00:13:30.745 { 00:13:30.745 "dma_device_id": "system", 00:13:30.745 "dma_device_type": 1 00:13:30.745 }, 00:13:30.745 { 00:13:30.745 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:30.745 "dma_device_type": 2 00:13:30.745 } 00:13:30.745 ], 00:13:30.745 "driver_specific": {} 00:13:30.745 } 00:13:30.745 ] 00:13:30.745 04:04:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:30.745 04:04:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 
00:13:30.745 04:04:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:13:30.745 04:04:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:13:30.745 04:04:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:13:30.745 04:04:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:30.745 04:04:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:30.745 [2024-12-06 04:04:23.887097] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:13:30.745 [2024-12-06 04:04:23.887184] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:13:30.746 [2024-12-06 04:04:23.887224] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:30.746 [2024-12-06 04:04:23.889033] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:13:30.746 [2024-12-06 04:04:23.889136] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:13:30.746 04:04:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:30.746 04:04:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:13:30.746 04:04:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:30.746 04:04:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:30.746 04:04:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:30.746 04:04:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 
00:13:30.746 04:04:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:30.746 04:04:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:30.746 04:04:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:30.746 04:04:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:30.746 04:04:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:30.746 04:04:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:30.746 04:04:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:30.746 04:04:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:30.746 04:04:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:30.746 04:04:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:30.746 04:04:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:30.746 "name": "Existed_Raid", 00:13:30.746 "uuid": "b294b217-a0ab-462d-92f5-441a0dd0c404", 00:13:30.746 "strip_size_kb": 0, 00:13:30.746 "state": "configuring", 00:13:30.746 "raid_level": "raid1", 00:13:30.746 "superblock": true, 00:13:30.746 "num_base_bdevs": 4, 00:13:30.746 "num_base_bdevs_discovered": 3, 00:13:30.746 "num_base_bdevs_operational": 4, 00:13:30.746 "base_bdevs_list": [ 00:13:30.746 { 00:13:30.746 "name": "BaseBdev1", 00:13:30.746 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:30.746 "is_configured": false, 00:13:30.746 "data_offset": 0, 00:13:30.746 "data_size": 0 00:13:30.746 }, 00:13:30.746 { 00:13:30.746 "name": "BaseBdev2", 00:13:30.746 "uuid": "4199df29-7b95-4f7b-a434-4157d11c89e0", 
00:13:30.746 "is_configured": true, 00:13:30.746 "data_offset": 2048, 00:13:30.746 "data_size": 63488 00:13:30.746 }, 00:13:30.746 { 00:13:30.746 "name": "BaseBdev3", 00:13:30.746 "uuid": "0e9f60d1-95b4-49ff-a535-dccbdd00be41", 00:13:30.746 "is_configured": true, 00:13:30.746 "data_offset": 2048, 00:13:30.746 "data_size": 63488 00:13:30.746 }, 00:13:30.746 { 00:13:30.746 "name": "BaseBdev4", 00:13:30.746 "uuid": "709e669d-d126-4434-95c8-6ae1a06f58e3", 00:13:30.746 "is_configured": true, 00:13:30.746 "data_offset": 2048, 00:13:30.746 "data_size": 63488 00:13:30.746 } 00:13:30.746 ] 00:13:30.746 }' 00:13:30.746 04:04:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:30.746 04:04:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:31.315 04:04:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:13:31.315 04:04:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:31.315 04:04:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:31.315 [2024-12-06 04:04:24.386244] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:13:31.315 04:04:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:31.315 04:04:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:13:31.315 04:04:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:31.315 04:04:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:31.315 04:04:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:31.315 04:04:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 
00:13:31.315 04:04:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:31.315 04:04:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:31.315 04:04:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:31.315 04:04:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:31.315 04:04:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:31.315 04:04:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:31.315 04:04:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:31.315 04:04:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:31.315 04:04:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:31.315 04:04:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:31.315 04:04:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:31.315 "name": "Existed_Raid", 00:13:31.315 "uuid": "b294b217-a0ab-462d-92f5-441a0dd0c404", 00:13:31.315 "strip_size_kb": 0, 00:13:31.315 "state": "configuring", 00:13:31.315 "raid_level": "raid1", 00:13:31.315 "superblock": true, 00:13:31.315 "num_base_bdevs": 4, 00:13:31.315 "num_base_bdevs_discovered": 2, 00:13:31.315 "num_base_bdevs_operational": 4, 00:13:31.315 "base_bdevs_list": [ 00:13:31.315 { 00:13:31.315 "name": "BaseBdev1", 00:13:31.315 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:31.315 "is_configured": false, 00:13:31.315 "data_offset": 0, 00:13:31.315 "data_size": 0 00:13:31.315 }, 00:13:31.315 { 00:13:31.315 "name": null, 00:13:31.315 "uuid": "4199df29-7b95-4f7b-a434-4157d11c89e0", 00:13:31.315 
"is_configured": false, 00:13:31.315 "data_offset": 0, 00:13:31.315 "data_size": 63488 00:13:31.315 }, 00:13:31.315 { 00:13:31.315 "name": "BaseBdev3", 00:13:31.315 "uuid": "0e9f60d1-95b4-49ff-a535-dccbdd00be41", 00:13:31.315 "is_configured": true, 00:13:31.315 "data_offset": 2048, 00:13:31.315 "data_size": 63488 00:13:31.315 }, 00:13:31.315 { 00:13:31.315 "name": "BaseBdev4", 00:13:31.315 "uuid": "709e669d-d126-4434-95c8-6ae1a06f58e3", 00:13:31.315 "is_configured": true, 00:13:31.315 "data_offset": 2048, 00:13:31.315 "data_size": 63488 00:13:31.315 } 00:13:31.315 ] 00:13:31.315 }' 00:13:31.315 04:04:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:31.315 04:04:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:31.609 04:04:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:13:31.609 04:04:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:31.609 04:04:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:31.609 04:04:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:31.609 04:04:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:31.609 04:04:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:13:31.609 04:04:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:13:31.609 04:04:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:31.609 04:04:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:31.609 [2024-12-06 04:04:24.849389] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:31.609 BaseBdev1 
00:13:31.609 04:04:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:31.609 04:04:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:13:31.609 04:04:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:13:31.609 04:04:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:13:31.609 04:04:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:13:31.609 04:04:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:13:31.609 04:04:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:13:31.609 04:04:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:13:31.609 04:04:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:31.609 04:04:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:31.609 04:04:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:31.609 04:04:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:13:31.609 04:04:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:31.609 04:04:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:31.609 [ 00:13:31.609 { 00:13:31.609 "name": "BaseBdev1", 00:13:31.609 "aliases": [ 00:13:31.609 "eb394559-8630-436a-a7f8-203e248c854e" 00:13:31.609 ], 00:13:31.609 "product_name": "Malloc disk", 00:13:31.609 "block_size": 512, 00:13:31.609 "num_blocks": 65536, 00:13:31.609 "uuid": "eb394559-8630-436a-a7f8-203e248c854e", 00:13:31.609 "assigned_rate_limits": { 00:13:31.609 
"rw_ios_per_sec": 0, 00:13:31.609 "rw_mbytes_per_sec": 0, 00:13:31.609 "r_mbytes_per_sec": 0, 00:13:31.609 "w_mbytes_per_sec": 0 00:13:31.609 }, 00:13:31.609 "claimed": true, 00:13:31.609 "claim_type": "exclusive_write", 00:13:31.609 "zoned": false, 00:13:31.610 "supported_io_types": { 00:13:31.610 "read": true, 00:13:31.610 "write": true, 00:13:31.610 "unmap": true, 00:13:31.610 "flush": true, 00:13:31.610 "reset": true, 00:13:31.610 "nvme_admin": false, 00:13:31.610 "nvme_io": false, 00:13:31.610 "nvme_io_md": false, 00:13:31.610 "write_zeroes": true, 00:13:31.610 "zcopy": true, 00:13:31.610 "get_zone_info": false, 00:13:31.610 "zone_management": false, 00:13:31.610 "zone_append": false, 00:13:31.610 "compare": false, 00:13:31.610 "compare_and_write": false, 00:13:31.610 "abort": true, 00:13:31.610 "seek_hole": false, 00:13:31.610 "seek_data": false, 00:13:31.610 "copy": true, 00:13:31.610 "nvme_iov_md": false 00:13:31.610 }, 00:13:31.610 "memory_domains": [ 00:13:31.610 { 00:13:31.610 "dma_device_id": "system", 00:13:31.610 "dma_device_type": 1 00:13:31.610 }, 00:13:31.610 { 00:13:31.610 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:31.610 "dma_device_type": 2 00:13:31.610 } 00:13:31.610 ], 00:13:31.610 "driver_specific": {} 00:13:31.610 } 00:13:31.610 ] 00:13:31.610 04:04:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:31.610 04:04:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:13:31.610 04:04:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:13:31.610 04:04:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:31.610 04:04:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:31.610 04:04:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local 
raid_level=raid1 00:13:31.610 04:04:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:31.610 04:04:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:31.610 04:04:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:31.610 04:04:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:31.610 04:04:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:31.610 04:04:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:31.610 04:04:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:31.610 04:04:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:31.610 04:04:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:31.610 04:04:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:31.610 04:04:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:31.610 04:04:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:31.610 "name": "Existed_Raid", 00:13:31.610 "uuid": "b294b217-a0ab-462d-92f5-441a0dd0c404", 00:13:31.610 "strip_size_kb": 0, 00:13:31.610 "state": "configuring", 00:13:31.610 "raid_level": "raid1", 00:13:31.610 "superblock": true, 00:13:31.610 "num_base_bdevs": 4, 00:13:31.610 "num_base_bdevs_discovered": 3, 00:13:31.610 "num_base_bdevs_operational": 4, 00:13:31.610 "base_bdevs_list": [ 00:13:31.610 { 00:13:31.610 "name": "BaseBdev1", 00:13:31.610 "uuid": "eb394559-8630-436a-a7f8-203e248c854e", 00:13:31.610 "is_configured": true, 00:13:31.610 "data_offset": 2048, 00:13:31.610 "data_size": 63488 
00:13:31.610 }, 00:13:31.610 { 00:13:31.610 "name": null, 00:13:31.610 "uuid": "4199df29-7b95-4f7b-a434-4157d11c89e0", 00:13:31.610 "is_configured": false, 00:13:31.610 "data_offset": 0, 00:13:31.610 "data_size": 63488 00:13:31.610 }, 00:13:31.610 { 00:13:31.610 "name": "BaseBdev3", 00:13:31.610 "uuid": "0e9f60d1-95b4-49ff-a535-dccbdd00be41", 00:13:31.610 "is_configured": true, 00:13:31.610 "data_offset": 2048, 00:13:31.610 "data_size": 63488 00:13:31.610 }, 00:13:31.610 { 00:13:31.610 "name": "BaseBdev4", 00:13:31.610 "uuid": "709e669d-d126-4434-95c8-6ae1a06f58e3", 00:13:31.610 "is_configured": true, 00:13:31.610 "data_offset": 2048, 00:13:31.610 "data_size": 63488 00:13:31.610 } 00:13:31.610 ] 00:13:31.610 }' 00:13:31.610 04:04:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:31.610 04:04:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:32.176 04:04:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:32.176 04:04:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:13:32.176 04:04:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:32.176 04:04:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:32.176 04:04:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:32.176 04:04:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:13:32.176 04:04:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:13:32.176 04:04:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:32.176 04:04:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:32.176 
[2024-12-06 04:04:25.360644] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:13:32.176 04:04:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:32.176 04:04:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:13:32.176 04:04:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:32.176 04:04:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:32.176 04:04:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:32.176 04:04:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:32.176 04:04:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:32.176 04:04:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:32.176 04:04:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:32.176 04:04:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:32.176 04:04:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:32.176 04:04:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:32.176 04:04:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:32.176 04:04:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:32.176 04:04:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:32.176 04:04:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:32.176 04:04:25 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:32.176 "name": "Existed_Raid", 00:13:32.176 "uuid": "b294b217-a0ab-462d-92f5-441a0dd0c404", 00:13:32.176 "strip_size_kb": 0, 00:13:32.176 "state": "configuring", 00:13:32.176 "raid_level": "raid1", 00:13:32.176 "superblock": true, 00:13:32.176 "num_base_bdevs": 4, 00:13:32.176 "num_base_bdevs_discovered": 2, 00:13:32.176 "num_base_bdevs_operational": 4, 00:13:32.176 "base_bdevs_list": [ 00:13:32.176 { 00:13:32.176 "name": "BaseBdev1", 00:13:32.176 "uuid": "eb394559-8630-436a-a7f8-203e248c854e", 00:13:32.176 "is_configured": true, 00:13:32.176 "data_offset": 2048, 00:13:32.176 "data_size": 63488 00:13:32.176 }, 00:13:32.176 { 00:13:32.176 "name": null, 00:13:32.176 "uuid": "4199df29-7b95-4f7b-a434-4157d11c89e0", 00:13:32.176 "is_configured": false, 00:13:32.176 "data_offset": 0, 00:13:32.176 "data_size": 63488 00:13:32.176 }, 00:13:32.176 { 00:13:32.176 "name": null, 00:13:32.176 "uuid": "0e9f60d1-95b4-49ff-a535-dccbdd00be41", 00:13:32.176 "is_configured": false, 00:13:32.176 "data_offset": 0, 00:13:32.176 "data_size": 63488 00:13:32.176 }, 00:13:32.176 { 00:13:32.176 "name": "BaseBdev4", 00:13:32.176 "uuid": "709e669d-d126-4434-95c8-6ae1a06f58e3", 00:13:32.176 "is_configured": true, 00:13:32.176 "data_offset": 2048, 00:13:32.176 "data_size": 63488 00:13:32.176 } 00:13:32.176 ] 00:13:32.176 }' 00:13:32.176 04:04:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:32.176 04:04:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:32.434 04:04:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:32.434 04:04:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:13:32.434 04:04:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:32.434 
04:04:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:32.691 04:04:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:32.691 04:04:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:13:32.691 04:04:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:13:32.691 04:04:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:32.691 04:04:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:32.691 [2024-12-06 04:04:25.823806] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:13:32.691 04:04:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:32.691 04:04:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:13:32.691 04:04:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:32.691 04:04:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:32.691 04:04:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:32.691 04:04:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:32.691 04:04:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:32.691 04:04:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:32.691 04:04:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:32.691 04:04:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:13:32.691 04:04:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:32.691 04:04:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:32.691 04:04:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:32.691 04:04:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:32.691 04:04:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:32.691 04:04:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:32.691 04:04:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:32.691 "name": "Existed_Raid", 00:13:32.691 "uuid": "b294b217-a0ab-462d-92f5-441a0dd0c404", 00:13:32.691 "strip_size_kb": 0, 00:13:32.691 "state": "configuring", 00:13:32.691 "raid_level": "raid1", 00:13:32.691 "superblock": true, 00:13:32.691 "num_base_bdevs": 4, 00:13:32.691 "num_base_bdevs_discovered": 3, 00:13:32.691 "num_base_bdevs_operational": 4, 00:13:32.691 "base_bdevs_list": [ 00:13:32.691 { 00:13:32.691 "name": "BaseBdev1", 00:13:32.691 "uuid": "eb394559-8630-436a-a7f8-203e248c854e", 00:13:32.691 "is_configured": true, 00:13:32.691 "data_offset": 2048, 00:13:32.691 "data_size": 63488 00:13:32.691 }, 00:13:32.691 { 00:13:32.691 "name": null, 00:13:32.691 "uuid": "4199df29-7b95-4f7b-a434-4157d11c89e0", 00:13:32.691 "is_configured": false, 00:13:32.691 "data_offset": 0, 00:13:32.691 "data_size": 63488 00:13:32.691 }, 00:13:32.691 { 00:13:32.691 "name": "BaseBdev3", 00:13:32.691 "uuid": "0e9f60d1-95b4-49ff-a535-dccbdd00be41", 00:13:32.691 "is_configured": true, 00:13:32.691 "data_offset": 2048, 00:13:32.691 "data_size": 63488 00:13:32.691 }, 00:13:32.691 { 00:13:32.691 "name": "BaseBdev4", 00:13:32.691 "uuid": 
"709e669d-d126-4434-95c8-6ae1a06f58e3", 00:13:32.691 "is_configured": true, 00:13:32.691 "data_offset": 2048, 00:13:32.691 "data_size": 63488 00:13:32.691 } 00:13:32.691 ] 00:13:32.691 }' 00:13:32.691 04:04:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:32.691 04:04:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:32.971 04:04:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:32.971 04:04:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:13:32.971 04:04:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:32.971 04:04:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:32.971 04:04:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:32.971 04:04:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:13:32.971 04:04:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:13:32.971 04:04:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:32.971 04:04:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:32.971 [2024-12-06 04:04:26.299090] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:13:33.229 04:04:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:33.229 04:04:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:13:33.229 04:04:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:33.229 04:04:26 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:33.229 04:04:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:33.229 04:04:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:33.229 04:04:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:33.229 04:04:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:33.229 04:04:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:33.229 04:04:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:33.229 04:04:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:33.229 04:04:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:33.229 04:04:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:33.229 04:04:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:33.229 04:04:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:33.229 04:04:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:33.229 04:04:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:33.229 "name": "Existed_Raid", 00:13:33.229 "uuid": "b294b217-a0ab-462d-92f5-441a0dd0c404", 00:13:33.229 "strip_size_kb": 0, 00:13:33.229 "state": "configuring", 00:13:33.229 "raid_level": "raid1", 00:13:33.229 "superblock": true, 00:13:33.229 "num_base_bdevs": 4, 00:13:33.229 "num_base_bdevs_discovered": 2, 00:13:33.229 "num_base_bdevs_operational": 4, 00:13:33.229 "base_bdevs_list": [ 00:13:33.229 { 00:13:33.229 "name": null, 00:13:33.229 
"uuid": "eb394559-8630-436a-a7f8-203e248c854e", 00:13:33.229 "is_configured": false, 00:13:33.229 "data_offset": 0, 00:13:33.229 "data_size": 63488 00:13:33.229 }, 00:13:33.229 { 00:13:33.229 "name": null, 00:13:33.229 "uuid": "4199df29-7b95-4f7b-a434-4157d11c89e0", 00:13:33.229 "is_configured": false, 00:13:33.229 "data_offset": 0, 00:13:33.229 "data_size": 63488 00:13:33.229 }, 00:13:33.229 { 00:13:33.229 "name": "BaseBdev3", 00:13:33.229 "uuid": "0e9f60d1-95b4-49ff-a535-dccbdd00be41", 00:13:33.229 "is_configured": true, 00:13:33.229 "data_offset": 2048, 00:13:33.229 "data_size": 63488 00:13:33.229 }, 00:13:33.229 { 00:13:33.229 "name": "BaseBdev4", 00:13:33.229 "uuid": "709e669d-d126-4434-95c8-6ae1a06f58e3", 00:13:33.229 "is_configured": true, 00:13:33.229 "data_offset": 2048, 00:13:33.229 "data_size": 63488 00:13:33.229 } 00:13:33.229 ] 00:13:33.229 }' 00:13:33.229 04:04:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:33.229 04:04:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:33.488 04:04:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:13:33.488 04:04:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:33.488 04:04:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:33.488 04:04:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:33.488 04:04:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:33.488 04:04:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:13:33.488 04:04:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:13:33.488 04:04:26 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:13:33.488 04:04:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:33.747 [2024-12-06 04:04:26.842066] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:33.748 04:04:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:33.748 04:04:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:13:33.748 04:04:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:33.748 04:04:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:33.748 04:04:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:33.748 04:04:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:33.748 04:04:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:33.748 04:04:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:33.748 04:04:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:33.748 04:04:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:33.748 04:04:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:33.748 04:04:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:33.748 04:04:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:33.748 04:04:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:33.748 04:04:26 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:33.748 04:04:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:33.748 04:04:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:33.748 "name": "Existed_Raid", 00:13:33.748 "uuid": "b294b217-a0ab-462d-92f5-441a0dd0c404", 00:13:33.748 "strip_size_kb": 0, 00:13:33.748 "state": "configuring", 00:13:33.748 "raid_level": "raid1", 00:13:33.748 "superblock": true, 00:13:33.748 "num_base_bdevs": 4, 00:13:33.748 "num_base_bdevs_discovered": 3, 00:13:33.748 "num_base_bdevs_operational": 4, 00:13:33.748 "base_bdevs_list": [ 00:13:33.748 { 00:13:33.748 "name": null, 00:13:33.748 "uuid": "eb394559-8630-436a-a7f8-203e248c854e", 00:13:33.748 "is_configured": false, 00:13:33.748 "data_offset": 0, 00:13:33.748 "data_size": 63488 00:13:33.748 }, 00:13:33.748 { 00:13:33.748 "name": "BaseBdev2", 00:13:33.748 "uuid": "4199df29-7b95-4f7b-a434-4157d11c89e0", 00:13:33.748 "is_configured": true, 00:13:33.748 "data_offset": 2048, 00:13:33.748 "data_size": 63488 00:13:33.748 }, 00:13:33.748 { 00:13:33.748 "name": "BaseBdev3", 00:13:33.748 "uuid": "0e9f60d1-95b4-49ff-a535-dccbdd00be41", 00:13:33.748 "is_configured": true, 00:13:33.748 "data_offset": 2048, 00:13:33.748 "data_size": 63488 00:13:33.748 }, 00:13:33.748 { 00:13:33.748 "name": "BaseBdev4", 00:13:33.748 "uuid": "709e669d-d126-4434-95c8-6ae1a06f58e3", 00:13:33.748 "is_configured": true, 00:13:33.748 "data_offset": 2048, 00:13:33.748 "data_size": 63488 00:13:33.748 } 00:13:33.748 ] 00:13:33.748 }' 00:13:33.748 04:04:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:33.748 04:04:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:34.008 04:04:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:34.008 04:04:27 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:34.008 04:04:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:34.008 04:04:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:13:34.008 04:04:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:34.008 04:04:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:13:34.008 04:04:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:13:34.008 04:04:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:34.008 04:04:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:34.008 04:04:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:34.268 04:04:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:34.269 04:04:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u eb394559-8630-436a-a7f8-203e248c854e 00:13:34.269 04:04:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:34.269 04:04:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:34.269 [2024-12-06 04:04:27.443033] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:13:34.269 [2024-12-06 04:04:27.443453] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:13:34.269 [2024-12-06 04:04:27.443534] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:13:34.269 [2024-12-06 04:04:27.443924] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: 
raid_bdev_create_cb, 0x60d0000063c0 00:13:34.269 NewBaseBdev 00:13:34.269 [2024-12-06 04:04:27.444218] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:13:34.269 [2024-12-06 04:04:27.444245] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:13:34.269 [2024-12-06 04:04:27.444454] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:34.269 04:04:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:34.269 04:04:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:13:34.269 04:04:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:13:34.269 04:04:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:13:34.269 04:04:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:13:34.269 04:04:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:13:34.269 04:04:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:13:34.269 04:04:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:13:34.269 04:04:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:34.269 04:04:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:34.269 04:04:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:34.269 04:04:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:13:34.269 04:04:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:34.269 04:04:27 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:34.269 [ 00:13:34.269 { 00:13:34.269 "name": "NewBaseBdev", 00:13:34.269 "aliases": [ 00:13:34.269 "eb394559-8630-436a-a7f8-203e248c854e" 00:13:34.269 ], 00:13:34.269 "product_name": "Malloc disk", 00:13:34.269 "block_size": 512, 00:13:34.269 "num_blocks": 65536, 00:13:34.269 "uuid": "eb394559-8630-436a-a7f8-203e248c854e", 00:13:34.269 "assigned_rate_limits": { 00:13:34.269 "rw_ios_per_sec": 0, 00:13:34.269 "rw_mbytes_per_sec": 0, 00:13:34.269 "r_mbytes_per_sec": 0, 00:13:34.269 "w_mbytes_per_sec": 0 00:13:34.269 }, 00:13:34.269 "claimed": true, 00:13:34.269 "claim_type": "exclusive_write", 00:13:34.269 "zoned": false, 00:13:34.269 "supported_io_types": { 00:13:34.269 "read": true, 00:13:34.269 "write": true, 00:13:34.269 "unmap": true, 00:13:34.269 "flush": true, 00:13:34.269 "reset": true, 00:13:34.269 "nvme_admin": false, 00:13:34.269 "nvme_io": false, 00:13:34.269 "nvme_io_md": false, 00:13:34.269 "write_zeroes": true, 00:13:34.269 "zcopy": true, 00:13:34.269 "get_zone_info": false, 00:13:34.269 "zone_management": false, 00:13:34.269 "zone_append": false, 00:13:34.269 "compare": false, 00:13:34.269 "compare_and_write": false, 00:13:34.269 "abort": true, 00:13:34.269 "seek_hole": false, 00:13:34.269 "seek_data": false, 00:13:34.269 "copy": true, 00:13:34.269 "nvme_iov_md": false 00:13:34.269 }, 00:13:34.269 "memory_domains": [ 00:13:34.269 { 00:13:34.269 "dma_device_id": "system", 00:13:34.269 "dma_device_type": 1 00:13:34.269 }, 00:13:34.269 { 00:13:34.269 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:34.269 "dma_device_type": 2 00:13:34.269 } 00:13:34.269 ], 00:13:34.269 "driver_specific": {} 00:13:34.269 } 00:13:34.269 ] 00:13:34.269 04:04:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:34.269 04:04:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:13:34.269 04:04:27 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid1 0 4 00:13:34.269 04:04:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:34.269 04:04:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:34.269 04:04:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:34.269 04:04:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:34.269 04:04:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:34.269 04:04:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:34.269 04:04:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:34.269 04:04:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:34.269 04:04:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:34.269 04:04:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:34.269 04:04:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:34.269 04:04:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:34.269 04:04:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:34.269 04:04:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:34.269 04:04:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:34.269 "name": "Existed_Raid", 00:13:34.269 "uuid": "b294b217-a0ab-462d-92f5-441a0dd0c404", 00:13:34.269 "strip_size_kb": 0, 00:13:34.269 
"state": "online", 00:13:34.269 "raid_level": "raid1", 00:13:34.269 "superblock": true, 00:13:34.269 "num_base_bdevs": 4, 00:13:34.269 "num_base_bdevs_discovered": 4, 00:13:34.269 "num_base_bdevs_operational": 4, 00:13:34.269 "base_bdevs_list": [ 00:13:34.269 { 00:13:34.269 "name": "NewBaseBdev", 00:13:34.269 "uuid": "eb394559-8630-436a-a7f8-203e248c854e", 00:13:34.269 "is_configured": true, 00:13:34.269 "data_offset": 2048, 00:13:34.269 "data_size": 63488 00:13:34.269 }, 00:13:34.269 { 00:13:34.269 "name": "BaseBdev2", 00:13:34.269 "uuid": "4199df29-7b95-4f7b-a434-4157d11c89e0", 00:13:34.269 "is_configured": true, 00:13:34.269 "data_offset": 2048, 00:13:34.270 "data_size": 63488 00:13:34.270 }, 00:13:34.270 { 00:13:34.270 "name": "BaseBdev3", 00:13:34.270 "uuid": "0e9f60d1-95b4-49ff-a535-dccbdd00be41", 00:13:34.270 "is_configured": true, 00:13:34.270 "data_offset": 2048, 00:13:34.270 "data_size": 63488 00:13:34.270 }, 00:13:34.270 { 00:13:34.270 "name": "BaseBdev4", 00:13:34.270 "uuid": "709e669d-d126-4434-95c8-6ae1a06f58e3", 00:13:34.270 "is_configured": true, 00:13:34.270 "data_offset": 2048, 00:13:34.270 "data_size": 63488 00:13:34.270 } 00:13:34.270 ] 00:13:34.270 }' 00:13:34.270 04:04:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:34.270 04:04:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:34.838 04:04:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:13:34.838 04:04:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:13:34.838 04:04:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:13:34.838 04:04:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:13:34.838 04:04:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:13:34.838 
04:04:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:13:34.838 04:04:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:13:34.838 04:04:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:13:34.838 04:04:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:34.838 04:04:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:34.838 [2024-12-06 04:04:27.962556] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:34.838 04:04:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:34.838 04:04:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:13:34.838 "name": "Existed_Raid", 00:13:34.838 "aliases": [ 00:13:34.838 "b294b217-a0ab-462d-92f5-441a0dd0c404" 00:13:34.838 ], 00:13:34.838 "product_name": "Raid Volume", 00:13:34.838 "block_size": 512, 00:13:34.838 "num_blocks": 63488, 00:13:34.838 "uuid": "b294b217-a0ab-462d-92f5-441a0dd0c404", 00:13:34.838 "assigned_rate_limits": { 00:13:34.838 "rw_ios_per_sec": 0, 00:13:34.838 "rw_mbytes_per_sec": 0, 00:13:34.838 "r_mbytes_per_sec": 0, 00:13:34.838 "w_mbytes_per_sec": 0 00:13:34.838 }, 00:13:34.838 "claimed": false, 00:13:34.838 "zoned": false, 00:13:34.838 "supported_io_types": { 00:13:34.838 "read": true, 00:13:34.838 "write": true, 00:13:34.838 "unmap": false, 00:13:34.838 "flush": false, 00:13:34.838 "reset": true, 00:13:34.838 "nvme_admin": false, 00:13:34.838 "nvme_io": false, 00:13:34.838 "nvme_io_md": false, 00:13:34.838 "write_zeroes": true, 00:13:34.838 "zcopy": false, 00:13:34.838 "get_zone_info": false, 00:13:34.838 "zone_management": false, 00:13:34.838 "zone_append": false, 00:13:34.838 "compare": false, 00:13:34.838 "compare_and_write": false, 00:13:34.838 
"abort": false, 00:13:34.838 "seek_hole": false, 00:13:34.838 "seek_data": false, 00:13:34.838 "copy": false, 00:13:34.838 "nvme_iov_md": false 00:13:34.838 }, 00:13:34.838 "memory_domains": [ 00:13:34.838 { 00:13:34.838 "dma_device_id": "system", 00:13:34.838 "dma_device_type": 1 00:13:34.838 }, 00:13:34.838 { 00:13:34.838 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:34.838 "dma_device_type": 2 00:13:34.838 }, 00:13:34.838 { 00:13:34.838 "dma_device_id": "system", 00:13:34.838 "dma_device_type": 1 00:13:34.838 }, 00:13:34.838 { 00:13:34.838 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:34.838 "dma_device_type": 2 00:13:34.839 }, 00:13:34.839 { 00:13:34.839 "dma_device_id": "system", 00:13:34.839 "dma_device_type": 1 00:13:34.839 }, 00:13:34.839 { 00:13:34.839 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:34.839 "dma_device_type": 2 00:13:34.839 }, 00:13:34.839 { 00:13:34.839 "dma_device_id": "system", 00:13:34.839 "dma_device_type": 1 00:13:34.839 }, 00:13:34.839 { 00:13:34.839 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:34.839 "dma_device_type": 2 00:13:34.839 } 00:13:34.839 ], 00:13:34.839 "driver_specific": { 00:13:34.839 "raid": { 00:13:34.839 "uuid": "b294b217-a0ab-462d-92f5-441a0dd0c404", 00:13:34.839 "strip_size_kb": 0, 00:13:34.839 "state": "online", 00:13:34.839 "raid_level": "raid1", 00:13:34.839 "superblock": true, 00:13:34.839 "num_base_bdevs": 4, 00:13:34.839 "num_base_bdevs_discovered": 4, 00:13:34.839 "num_base_bdevs_operational": 4, 00:13:34.839 "base_bdevs_list": [ 00:13:34.839 { 00:13:34.839 "name": "NewBaseBdev", 00:13:34.839 "uuid": "eb394559-8630-436a-a7f8-203e248c854e", 00:13:34.839 "is_configured": true, 00:13:34.839 "data_offset": 2048, 00:13:34.839 "data_size": 63488 00:13:34.839 }, 00:13:34.839 { 00:13:34.839 "name": "BaseBdev2", 00:13:34.839 "uuid": "4199df29-7b95-4f7b-a434-4157d11c89e0", 00:13:34.839 "is_configured": true, 00:13:34.839 "data_offset": 2048, 00:13:34.839 "data_size": 63488 00:13:34.839 }, 00:13:34.839 { 
00:13:34.839 "name": "BaseBdev3", 00:13:34.839 "uuid": "0e9f60d1-95b4-49ff-a535-dccbdd00be41", 00:13:34.839 "is_configured": true, 00:13:34.839 "data_offset": 2048, 00:13:34.839 "data_size": 63488 00:13:34.839 }, 00:13:34.839 { 00:13:34.839 "name": "BaseBdev4", 00:13:34.839 "uuid": "709e669d-d126-4434-95c8-6ae1a06f58e3", 00:13:34.839 "is_configured": true, 00:13:34.839 "data_offset": 2048, 00:13:34.839 "data_size": 63488 00:13:34.839 } 00:13:34.839 ] 00:13:34.839 } 00:13:34.839 } 00:13:34.839 }' 00:13:34.839 04:04:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:13:34.839 04:04:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:13:34.839 BaseBdev2 00:13:34.839 BaseBdev3 00:13:34.839 BaseBdev4' 00:13:34.839 04:04:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:34.839 04:04:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:13:34.839 04:04:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:34.839 04:04:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:13:34.839 04:04:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:34.839 04:04:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:34.839 04:04:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:34.839 04:04:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:34.839 04:04:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 
00:13:34.839 04:04:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:34.839 04:04:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:34.839 04:04:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:13:34.839 04:04:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:34.839 04:04:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:34.839 04:04:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:34.839 04:04:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:34.839 04:04:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:34.839 04:04:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:34.839 04:04:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:34.839 04:04:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:34.839 04:04:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:13:34.839 04:04:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:34.839 04:04:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:35.098 04:04:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:35.099 04:04:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:35.099 04:04:28 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:35.099 04:04:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:35.099 04:04:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:13:35.099 04:04:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:35.099 04:04:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:35.099 04:04:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:35.099 04:04:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:35.099 04:04:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:35.099 04:04:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:35.099 04:04:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:13:35.099 04:04:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:35.099 04:04:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:35.099 [2024-12-06 04:04:28.257610] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:13:35.099 [2024-12-06 04:04:28.257637] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:35.099 [2024-12-06 04:04:28.257706] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:35.099 [2024-12-06 04:04:28.257986] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:35.099 [2024-12-06 04:04:28.257999] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 
0x617000008200 name Existed_Raid, state offline 00:13:35.099 04:04:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:35.099 04:04:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 73981 00:13:35.099 04:04:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 73981 ']' 00:13:35.099 04:04:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 73981 00:13:35.099 04:04:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:13:35.099 04:04:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:35.099 04:04:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 73981 00:13:35.099 04:04:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:35.099 04:04:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:35.099 killing process with pid 73981 00:13:35.099 04:04:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 73981' 00:13:35.099 04:04:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 73981 00:13:35.099 [2024-12-06 04:04:28.303565] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:13:35.099 04:04:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 73981 00:13:35.358 [2024-12-06 04:04:28.684484] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:13:36.812 04:04:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:13:36.812 ************************************ 00:13:36.812 END TEST raid_state_function_test_sb 00:13:36.812 00:13:36.812 real 0m11.617s 00:13:36.812 user 0m18.319s 00:13:36.812 sys 0m2.250s 
00:13:36.812 04:04:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:36.812 04:04:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:36.812 ************************************ 00:13:36.812 04:04:29 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test raid1 4 00:13:36.812 04:04:29 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:13:36.812 04:04:29 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:36.812 04:04:29 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:13:36.812 ************************************ 00:13:36.812 START TEST raid_superblock_test 00:13:36.812 ************************************ 00:13:36.812 04:04:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test raid1 4 00:13:36.812 04:04:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1 00:13:36.812 04:04:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=4 00:13:36.812 04:04:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:13:36.812 04:04:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:13:36.812 04:04:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:13:36.812 04:04:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:13:36.812 04:04:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:13:36.812 04:04:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:13:36.812 04:04:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:13:36.812 04:04:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:13:36.812 04:04:29 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:13:36.812 04:04:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:13:36.812 04:04:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:13:36.812 04:04:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid1 '!=' raid1 ']' 00:13:36.812 04:04:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:13:36.812 04:04:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:13:36.812 04:04:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=74652 00:13:36.812 04:04:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 74652 00:13:36.812 04:04:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 74652 ']' 00:13:36.812 04:04:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:36.812 04:04:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:36.812 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:36.812 04:04:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:36.812 04:04:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:36.812 04:04:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:36.812 [2024-12-06 04:04:29.928330] Starting SPDK v25.01-pre git sha1 a4d2a837b / DPDK 24.03.0 initialization... 
00:13:36.812 [2024-12-06 04:04:29.928552] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74652 ] 00:13:36.812 [2024-12-06 04:04:30.080760] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:37.083 [2024-12-06 04:04:30.194119] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:37.083 [2024-12-06 04:04:30.396930] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:37.083 [2024-12-06 04:04:30.397077] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:37.650 04:04:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:37.650 04:04:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:13:37.650 04:04:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:13:37.650 04:04:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:13:37.650 04:04:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:13:37.650 04:04:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:13:37.650 04:04:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:13:37.650 04:04:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:13:37.650 04:04:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:13:37.650 04:04:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:13:37.650 04:04:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:13:37.650 
04:04:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:37.650 04:04:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:37.650 malloc1 00:13:37.650 04:04:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:37.650 04:04:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:13:37.650 04:04:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:37.650 04:04:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:37.650 [2024-12-06 04:04:30.834288] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:13:37.650 [2024-12-06 04:04:30.834425] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:37.650 [2024-12-06 04:04:30.834506] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:13:37.650 [2024-12-06 04:04:30.834562] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:37.650 [2024-12-06 04:04:30.837214] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:37.650 [2024-12-06 04:04:30.837314] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:13:37.650 pt1 00:13:37.650 04:04:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:37.650 04:04:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:13:37.650 04:04:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:13:37.650 04:04:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:13:37.650 04:04:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:13:37.650 04:04:30 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:13:37.650 04:04:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:13:37.650 04:04:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:13:37.650 04:04:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:13:37.650 04:04:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:13:37.650 04:04:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:37.650 04:04:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:37.650 malloc2 00:13:37.651 04:04:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:37.651 04:04:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:13:37.651 04:04:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:37.651 04:04:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:37.651 [2024-12-06 04:04:30.894094] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:13:37.651 [2024-12-06 04:04:30.894188] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:37.651 [2024-12-06 04:04:30.894233] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:13:37.651 [2024-12-06 04:04:30.894261] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:37.651 [2024-12-06 04:04:30.896330] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:37.651 [2024-12-06 04:04:30.896407] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:13:37.651 
pt2 00:13:37.651 04:04:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:37.651 04:04:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:13:37.651 04:04:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:13:37.651 04:04:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:13:37.651 04:04:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:13:37.651 04:04:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:13:37.651 04:04:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:13:37.651 04:04:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:13:37.651 04:04:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:13:37.651 04:04:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:13:37.651 04:04:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:37.651 04:04:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:37.651 malloc3 00:13:37.651 04:04:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:37.651 04:04:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:13:37.651 04:04:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:37.651 04:04:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:37.651 [2024-12-06 04:04:30.961743] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:13:37.651 [2024-12-06 04:04:30.961842] 
vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:37.651 [2024-12-06 04:04:30.961881] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:13:37.651 [2024-12-06 04:04:30.961910] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:37.651 [2024-12-06 04:04:30.964115] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:37.651 [2024-12-06 04:04:30.964183] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:13:37.651 pt3 00:13:37.651 04:04:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:37.651 04:04:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:13:37.651 04:04:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:13:37.651 04:04:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc4 00:13:37.651 04:04:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt4 00:13:37.651 04:04:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004 00:13:37.651 04:04:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:13:37.651 04:04:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:13:37.651 04:04:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:13:37.651 04:04:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc4 00:13:37.651 04:04:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:37.651 04:04:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:37.910 malloc4 00:13:37.910 04:04:31 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:37.910 04:04:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:13:37.910 04:04:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:37.910 04:04:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:37.910 [2024-12-06 04:04:31.021756] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:13:37.910 [2024-12-06 04:04:31.021856] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:37.910 [2024-12-06 04:04:31.021895] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:13:37.910 [2024-12-06 04:04:31.021924] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:37.910 [2024-12-06 04:04:31.023983] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:37.910 [2024-12-06 04:04:31.024060] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:13:37.910 pt4 00:13:37.910 04:04:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:37.910 04:04:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:13:37.910 04:04:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:13:37.910 04:04:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2 pt3 pt4'\''' -n raid_bdev1 -s 00:13:37.910 04:04:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:37.910 04:04:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:37.910 [2024-12-06 04:04:31.033760] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:13:37.910 [2024-12-06 04:04:31.035528] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:13:37.910 [2024-12-06 04:04:31.035624] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:13:37.910 [2024-12-06 04:04:31.035717] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:13:37.910 [2024-12-06 04:04:31.035930] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:13:37.910 [2024-12-06 04:04:31.035978] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:13:37.910 [2024-12-06 04:04:31.036265] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:13:37.910 [2024-12-06 04:04:31.036491] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:13:37.910 [2024-12-06 04:04:31.036541] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:13:37.910 [2024-12-06 04:04:31.036741] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:37.910 04:04:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:37.910 04:04:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:13:37.910 04:04:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:37.910 04:04:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:37.910 04:04:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:37.910 04:04:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:37.910 04:04:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:37.910 04:04:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:37.910 
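Stepping back from the trace for a moment: the `rpc_cmd bdev_malloc_create 32 512 -b malloc$i` / `bdev_passthru_create -b malloc$i -p pt$i -u …` pairs above come from the `@416`–`@426` loop in `bdev_raid.sh`, which builds four malloc+passthru base bdevs before `bdev_raid_create`. A standalone sketch of that loop, with `rpc_cmd` stubbed to `echo` (an assumption for illustration — the real helper sends JSON-RPC to `/var/tmp/spdk.sock` and needs a running SPDK target):

```shell
#!/usr/bin/env bash
# Sketch of the bdev_raid.sh base-bdev setup loop traced in the log above.
# rpc_cmd is stubbed so this runs without an SPDK target.
set -euo pipefail

num_base_bdevs=4
base_bdevs_malloc=() base_bdevs_pt=() base_bdevs_pt_uuid=()

# Stub: the real rpc_cmd forwards to scripts/rpc.py against spdk.sock.
rpc_cmd() { echo "rpc_cmd $*"; }

for ((i = 1; i <= num_base_bdevs; i++)); do
    bdev_malloc=malloc$i
    bdev_pt=pt$i
    bdev_pt_uuid=00000000-0000-0000-0000-00000000000$i
    base_bdevs_malloc+=("$bdev_malloc")
    base_bdevs_pt+=("$bdev_pt")
    base_bdevs_pt_uuid+=("$bdev_pt_uuid")
    # 32 MiB malloc bdev with 512-byte blocks, wrapped in a passthru bdev
    rpc_cmd bdev_malloc_create 32 512 -b "$bdev_malloc"
    rpc_cmd bdev_passthru_create -b "$bdev_malloc" -p "$bdev_pt" -u "$bdev_pt_uuid"
done

# Finally assemble the RAID1 volume with superblock (-s) over pt1..pt4
rpc_cmd bdev_raid_create -r raid1 -b "${base_bdevs_pt[*]}" -n raid_bdev1 -s
```

The passthru layer exists so the test can claim and release the malloc bdevs independently of the raid module; the `-s` flag matches the `"superblock": true` seen in the dumped bdev JSON.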
04:04:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:37.910 04:04:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:37.910 04:04:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:37.910 04:04:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:37.910 04:04:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:37.910 04:04:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:37.911 04:04:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:37.911 04:04:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:37.911 04:04:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:37.911 "name": "raid_bdev1", 00:13:37.911 "uuid": "3aa91d68-16de-4600-bd42-46891854259f", 00:13:37.911 "strip_size_kb": 0, 00:13:37.911 "state": "online", 00:13:37.911 "raid_level": "raid1", 00:13:37.911 "superblock": true, 00:13:37.911 "num_base_bdevs": 4, 00:13:37.911 "num_base_bdevs_discovered": 4, 00:13:37.911 "num_base_bdevs_operational": 4, 00:13:37.911 "base_bdevs_list": [ 00:13:37.911 { 00:13:37.911 "name": "pt1", 00:13:37.911 "uuid": "00000000-0000-0000-0000-000000000001", 00:13:37.911 "is_configured": true, 00:13:37.911 "data_offset": 2048, 00:13:37.911 "data_size": 63488 00:13:37.911 }, 00:13:37.911 { 00:13:37.911 "name": "pt2", 00:13:37.911 "uuid": "00000000-0000-0000-0000-000000000002", 00:13:37.911 "is_configured": true, 00:13:37.911 "data_offset": 2048, 00:13:37.911 "data_size": 63488 00:13:37.911 }, 00:13:37.911 { 00:13:37.911 "name": "pt3", 00:13:37.911 "uuid": "00000000-0000-0000-0000-000000000003", 00:13:37.911 "is_configured": true, 00:13:37.911 "data_offset": 2048, 00:13:37.911 "data_size": 63488 
00:13:37.911 }, 00:13:37.911 { 00:13:37.911 "name": "pt4", 00:13:37.911 "uuid": "00000000-0000-0000-0000-000000000004", 00:13:37.911 "is_configured": true, 00:13:37.911 "data_offset": 2048, 00:13:37.911 "data_size": 63488 00:13:37.911 } 00:13:37.911 ] 00:13:37.911 }' 00:13:37.911 04:04:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:37.911 04:04:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:38.169 04:04:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:13:38.169 04:04:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:13:38.169 04:04:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:13:38.169 04:04:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:13:38.169 04:04:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:13:38.169 04:04:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:13:38.169 04:04:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:13:38.169 04:04:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:38.169 04:04:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:38.169 04:04:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:13:38.169 [2024-12-06 04:04:31.429531] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:38.169 04:04:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:38.169 04:04:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:13:38.169 "name": "raid_bdev1", 00:13:38.169 "aliases": [ 00:13:38.169 "3aa91d68-16de-4600-bd42-46891854259f" 00:13:38.169 ], 
00:13:38.169 "product_name": "Raid Volume", 00:13:38.169 "block_size": 512, 00:13:38.169 "num_blocks": 63488, 00:13:38.169 "uuid": "3aa91d68-16de-4600-bd42-46891854259f", 00:13:38.169 "assigned_rate_limits": { 00:13:38.169 "rw_ios_per_sec": 0, 00:13:38.169 "rw_mbytes_per_sec": 0, 00:13:38.169 "r_mbytes_per_sec": 0, 00:13:38.169 "w_mbytes_per_sec": 0 00:13:38.169 }, 00:13:38.169 "claimed": false, 00:13:38.169 "zoned": false, 00:13:38.169 "supported_io_types": { 00:13:38.169 "read": true, 00:13:38.169 "write": true, 00:13:38.169 "unmap": false, 00:13:38.169 "flush": false, 00:13:38.169 "reset": true, 00:13:38.169 "nvme_admin": false, 00:13:38.169 "nvme_io": false, 00:13:38.169 "nvme_io_md": false, 00:13:38.169 "write_zeroes": true, 00:13:38.169 "zcopy": false, 00:13:38.169 "get_zone_info": false, 00:13:38.169 "zone_management": false, 00:13:38.169 "zone_append": false, 00:13:38.169 "compare": false, 00:13:38.169 "compare_and_write": false, 00:13:38.169 "abort": false, 00:13:38.169 "seek_hole": false, 00:13:38.169 "seek_data": false, 00:13:38.169 "copy": false, 00:13:38.169 "nvme_iov_md": false 00:13:38.169 }, 00:13:38.169 "memory_domains": [ 00:13:38.169 { 00:13:38.170 "dma_device_id": "system", 00:13:38.170 "dma_device_type": 1 00:13:38.170 }, 00:13:38.170 { 00:13:38.170 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:38.170 "dma_device_type": 2 00:13:38.170 }, 00:13:38.170 { 00:13:38.170 "dma_device_id": "system", 00:13:38.170 "dma_device_type": 1 00:13:38.170 }, 00:13:38.170 { 00:13:38.170 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:38.170 "dma_device_type": 2 00:13:38.170 }, 00:13:38.170 { 00:13:38.170 "dma_device_id": "system", 00:13:38.170 "dma_device_type": 1 00:13:38.170 }, 00:13:38.170 { 00:13:38.170 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:38.170 "dma_device_type": 2 00:13:38.170 }, 00:13:38.170 { 00:13:38.170 "dma_device_id": "system", 00:13:38.170 "dma_device_type": 1 00:13:38.170 }, 00:13:38.170 { 00:13:38.170 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:13:38.170 "dma_device_type": 2 00:13:38.170 } 00:13:38.170 ], 00:13:38.170 "driver_specific": { 00:13:38.170 "raid": { 00:13:38.170 "uuid": "3aa91d68-16de-4600-bd42-46891854259f", 00:13:38.170 "strip_size_kb": 0, 00:13:38.170 "state": "online", 00:13:38.170 "raid_level": "raid1", 00:13:38.170 "superblock": true, 00:13:38.170 "num_base_bdevs": 4, 00:13:38.170 "num_base_bdevs_discovered": 4, 00:13:38.170 "num_base_bdevs_operational": 4, 00:13:38.170 "base_bdevs_list": [ 00:13:38.170 { 00:13:38.170 "name": "pt1", 00:13:38.170 "uuid": "00000000-0000-0000-0000-000000000001", 00:13:38.170 "is_configured": true, 00:13:38.170 "data_offset": 2048, 00:13:38.170 "data_size": 63488 00:13:38.170 }, 00:13:38.170 { 00:13:38.170 "name": "pt2", 00:13:38.170 "uuid": "00000000-0000-0000-0000-000000000002", 00:13:38.170 "is_configured": true, 00:13:38.170 "data_offset": 2048, 00:13:38.170 "data_size": 63488 00:13:38.170 }, 00:13:38.170 { 00:13:38.170 "name": "pt3", 00:13:38.170 "uuid": "00000000-0000-0000-0000-000000000003", 00:13:38.170 "is_configured": true, 00:13:38.170 "data_offset": 2048, 00:13:38.170 "data_size": 63488 00:13:38.170 }, 00:13:38.170 { 00:13:38.170 "name": "pt4", 00:13:38.170 "uuid": "00000000-0000-0000-0000-000000000004", 00:13:38.170 "is_configured": true, 00:13:38.170 "data_offset": 2048, 00:13:38.170 "data_size": 63488 00:13:38.170 } 00:13:38.170 ] 00:13:38.170 } 00:13:38.170 } 00:13:38.170 }' 00:13:38.170 04:04:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:13:38.170 04:04:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:13:38.170 pt2 00:13:38.170 pt3 00:13:38.170 pt4' 00:13:38.170 04:04:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:38.429 04:04:31 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:13:38.429 04:04:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:38.429 04:04:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:38.429 04:04:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:13:38.429 04:04:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:38.429 04:04:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:38.429 04:04:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:38.429 04:04:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:38.429 04:04:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:38.429 04:04:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:38.429 04:04:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:13:38.429 04:04:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:38.429 04:04:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:38.429 04:04:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:38.429 04:04:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:38.429 04:04:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:38.429 04:04:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:38.429 04:04:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:38.429 04:04:31 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:38.429 04:04:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:13:38.429 04:04:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:38.429 04:04:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:38.430 04:04:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:38.430 04:04:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:38.430 04:04:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:38.430 04:04:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:38.430 04:04:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:38.430 04:04:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:13:38.430 04:04:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:38.430 04:04:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:38.430 04:04:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:38.430 04:04:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:38.430 04:04:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:38.430 04:04:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:13:38.430 04:04:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:13:38.430 04:04:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:13:38.430 04:04:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:38.430 [2024-12-06 04:04:31.728825] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:38.430 04:04:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:38.430 04:04:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=3aa91d68-16de-4600-bd42-46891854259f 00:13:38.430 04:04:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 3aa91d68-16de-4600-bd42-46891854259f ']' 00:13:38.430 04:04:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:13:38.430 04:04:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:38.430 04:04:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:38.430 [2024-12-06 04:04:31.760480] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:13:38.430 [2024-12-06 04:04:31.760501] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:38.430 [2024-12-06 04:04:31.760570] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:38.430 [2024-12-06 04:04:31.760664] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:38.430 [2024-12-06 04:04:31.760681] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:13:38.430 04:04:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:38.430 04:04:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:38.430 04:04:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:38.430 04:04:31 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:13:38.430 04:04:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:13:38.430 04:04:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:38.690 04:04:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:13:38.690 04:04:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:13:38.690 04:04:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:13:38.690 04:04:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:13:38.690 04:04:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:38.690 04:04:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:38.690 04:04:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:38.690 04:04:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:13:38.690 04:04:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:13:38.690 04:04:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:38.690 04:04:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:38.690 04:04:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:38.690 04:04:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:13:38.690 04:04:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:13:38.690 04:04:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:38.690 04:04:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:38.690 04:04:31 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:38.690 04:04:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:13:38.690 04:04:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt4 00:13:38.690 04:04:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:38.690 04:04:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:38.690 04:04:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:38.690 04:04:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:13:38.690 04:04:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:13:38.690 04:04:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:38.690 04:04:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:38.690 04:04:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:38.690 04:04:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:13:38.690 04:04:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:13:38.690 04:04:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:13:38.690 04:04:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:13:38.690 04:04:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:13:38.690 04:04:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:38.690 04:04:31 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:13:38.690 04:04:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:38.690 04:04:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:13:38.690 04:04:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:38.690 04:04:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:38.690 [2024-12-06 04:04:31.928265] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:13:38.690 [2024-12-06 04:04:31.930283] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:13:38.690 [2024-12-06 04:04:31.930376] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:13:38.690 [2024-12-06 04:04:31.930451] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed 00:13:38.690 [2024-12-06 04:04:31.930564] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:13:38.690 [2024-12-06 04:04:31.930667] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:13:38.690 [2024-12-06 04:04:31.930727] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:13:38.691 [2024-12-06 04:04:31.930792] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc4 00:13:38.691 [2024-12-06 04:04:31.930842] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:13:38.691 [2024-12-06 04:04:31.930876] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name 
raid_bdev1, state configuring 00:13:38.691 request: 00:13:38.691 { 00:13:38.691 "name": "raid_bdev1", 00:13:38.691 "raid_level": "raid1", 00:13:38.691 "base_bdevs": [ 00:13:38.691 "malloc1", 00:13:38.691 "malloc2", 00:13:38.691 "malloc3", 00:13:38.691 "malloc4" 00:13:38.691 ], 00:13:38.691 "superblock": false, 00:13:38.691 "method": "bdev_raid_create", 00:13:38.691 "req_id": 1 00:13:38.691 } 00:13:38.691 Got JSON-RPC error response 00:13:38.691 response: 00:13:38.691 { 00:13:38.691 "code": -17, 00:13:38.691 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:13:38.691 } 00:13:38.691 04:04:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:13:38.691 04:04:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:13:38.691 04:04:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:13:38.691 04:04:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:13:38.691 04:04:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:13:38.691 04:04:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:13:38.691 04:04:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:38.691 04:04:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:38.691 04:04:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:38.691 04:04:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:38.691 04:04:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:13:38.691 04:04:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:13:38.691 04:04:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:13:38.691 
04:04:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:38.691 04:04:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:38.691 [2024-12-06 04:04:32.000123] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:13:38.691 [2024-12-06 04:04:32.000227] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:38.691 [2024-12-06 04:04:32.000262] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:13:38.691 [2024-12-06 04:04:32.000297] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:38.691 [2024-12-06 04:04:32.002596] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:38.691 [2024-12-06 04:04:32.002695] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:13:38.691 [2024-12-06 04:04:32.002812] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:13:38.691 [2024-12-06 04:04:32.002899] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:13:38.691 pt1 00:13:38.691 04:04:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:38.691 04:04:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 4 00:13:38.691 04:04:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:38.691 04:04:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:38.691 04:04:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:38.691 04:04:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:38.691 04:04:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:38.691 04:04:32 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:38.691 04:04:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:38.691 04:04:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:38.691 04:04:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:38.691 04:04:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:38.691 04:04:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:38.691 04:04:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:38.691 04:04:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:38.691 04:04:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:38.691 04:04:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:38.691 "name": "raid_bdev1", 00:13:38.691 "uuid": "3aa91d68-16de-4600-bd42-46891854259f", 00:13:38.691 "strip_size_kb": 0, 00:13:38.691 "state": "configuring", 00:13:38.691 "raid_level": "raid1", 00:13:38.691 "superblock": true, 00:13:38.691 "num_base_bdevs": 4, 00:13:38.691 "num_base_bdevs_discovered": 1, 00:13:38.691 "num_base_bdevs_operational": 4, 00:13:38.691 "base_bdevs_list": [ 00:13:38.691 { 00:13:38.691 "name": "pt1", 00:13:38.691 "uuid": "00000000-0000-0000-0000-000000000001", 00:13:38.691 "is_configured": true, 00:13:38.691 "data_offset": 2048, 00:13:38.691 "data_size": 63488 00:13:38.691 }, 00:13:38.691 { 00:13:38.691 "name": null, 00:13:38.691 "uuid": "00000000-0000-0000-0000-000000000002", 00:13:38.691 "is_configured": false, 00:13:38.691 "data_offset": 2048, 00:13:38.691 "data_size": 63488 00:13:38.691 }, 00:13:38.691 { 00:13:38.691 "name": null, 00:13:38.691 "uuid": "00000000-0000-0000-0000-000000000003", 00:13:38.691 
"is_configured": false, 00:13:38.691 "data_offset": 2048, 00:13:38.691 "data_size": 63488 00:13:38.691 }, 00:13:38.691 { 00:13:38.691 "name": null, 00:13:38.691 "uuid": "00000000-0000-0000-0000-000000000004", 00:13:38.691 "is_configured": false, 00:13:38.691 "data_offset": 2048, 00:13:38.691 "data_size": 63488 00:13:38.691 } 00:13:38.691 ] 00:13:38.691 }' 00:13:38.691 04:04:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:38.950 04:04:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:39.210 04:04:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 4 -gt 2 ']' 00:13:39.210 04:04:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:13:39.210 04:04:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:39.210 04:04:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:39.210 [2024-12-06 04:04:32.391468] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:13:39.210 [2024-12-06 04:04:32.391587] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:39.210 [2024-12-06 04:04:32.391628] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:13:39.210 [2024-12-06 04:04:32.391659] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:39.210 [2024-12-06 04:04:32.392162] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:39.210 [2024-12-06 04:04:32.392236] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:13:39.210 [2024-12-06 04:04:32.392353] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:13:39.210 [2024-12-06 04:04:32.392423] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 
00:13:39.210 pt2 00:13:39.210 04:04:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:39.210 04:04:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:13:39.210 04:04:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:39.210 04:04:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:39.210 [2024-12-06 04:04:32.403431] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:13:39.210 04:04:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:39.210 04:04:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 4 00:13:39.210 04:04:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:39.210 04:04:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:39.210 04:04:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:39.210 04:04:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:39.210 04:04:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:39.210 04:04:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:39.210 04:04:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:39.210 04:04:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:39.210 04:04:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:39.210 04:04:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:39.210 04:04:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 
00:13:39.210 04:04:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:39.210 04:04:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:39.210 04:04:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:39.210 04:04:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:39.210 "name": "raid_bdev1", 00:13:39.210 "uuid": "3aa91d68-16de-4600-bd42-46891854259f", 00:13:39.210 "strip_size_kb": 0, 00:13:39.210 "state": "configuring", 00:13:39.210 "raid_level": "raid1", 00:13:39.210 "superblock": true, 00:13:39.210 "num_base_bdevs": 4, 00:13:39.210 "num_base_bdevs_discovered": 1, 00:13:39.210 "num_base_bdevs_operational": 4, 00:13:39.210 "base_bdevs_list": [ 00:13:39.210 { 00:13:39.210 "name": "pt1", 00:13:39.210 "uuid": "00000000-0000-0000-0000-000000000001", 00:13:39.210 "is_configured": true, 00:13:39.210 "data_offset": 2048, 00:13:39.210 "data_size": 63488 00:13:39.210 }, 00:13:39.210 { 00:13:39.210 "name": null, 00:13:39.210 "uuid": "00000000-0000-0000-0000-000000000002", 00:13:39.210 "is_configured": false, 00:13:39.210 "data_offset": 0, 00:13:39.210 "data_size": 63488 00:13:39.210 }, 00:13:39.210 { 00:13:39.210 "name": null, 00:13:39.210 "uuid": "00000000-0000-0000-0000-000000000003", 00:13:39.210 "is_configured": false, 00:13:39.210 "data_offset": 2048, 00:13:39.210 "data_size": 63488 00:13:39.210 }, 00:13:39.210 { 00:13:39.210 "name": null, 00:13:39.210 "uuid": "00000000-0000-0000-0000-000000000004", 00:13:39.210 "is_configured": false, 00:13:39.210 "data_offset": 2048, 00:13:39.210 "data_size": 63488 00:13:39.210 } 00:13:39.210 ] 00:13:39.210 }' 00:13:39.210 04:04:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:39.210 04:04:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:39.779 04:04:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i 
= 1 )) 00:13:39.780 04:04:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:13:39.780 04:04:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:13:39.780 04:04:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:39.780 04:04:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:39.780 [2024-12-06 04:04:32.878627] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:13:39.780 [2024-12-06 04:04:32.878735] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:39.780 [2024-12-06 04:04:32.878773] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:13:39.780 [2024-12-06 04:04:32.878799] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:39.780 [2024-12-06 04:04:32.879307] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:39.780 [2024-12-06 04:04:32.879363] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:13:39.780 [2024-12-06 04:04:32.879479] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:13:39.780 [2024-12-06 04:04:32.879529] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:13:39.780 pt2 00:13:39.780 04:04:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:39.780 04:04:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:13:39.780 04:04:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:13:39.780 04:04:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:13:39.780 04:04:32 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:39.780 04:04:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:39.780 [2024-12-06 04:04:32.890570] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:13:39.780 [2024-12-06 04:04:32.890616] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:39.780 [2024-12-06 04:04:32.890633] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:13:39.780 [2024-12-06 04:04:32.890641] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:39.780 [2024-12-06 04:04:32.890986] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:39.780 [2024-12-06 04:04:32.891001] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:13:39.780 [2024-12-06 04:04:32.891081] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:13:39.780 [2024-12-06 04:04:32.891100] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:13:39.780 pt3 00:13:39.780 04:04:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:39.780 04:04:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:13:39.780 04:04:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:13:39.780 04:04:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:13:39.780 04:04:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:39.780 04:04:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:39.780 [2024-12-06 04:04:32.902544] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:13:39.780 [2024-12-06 
04:04:32.902628] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:39.780 [2024-12-06 04:04:32.902659] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:13:39.780 [2024-12-06 04:04:32.902684] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:39.780 [2024-12-06 04:04:32.903079] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:39.780 [2024-12-06 04:04:32.903132] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:13:39.780 [2024-12-06 04:04:32.903218] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:13:39.780 [2024-12-06 04:04:32.903272] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:13:39.780 [2024-12-06 04:04:32.903434] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:13:39.780 [2024-12-06 04:04:32.903472] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:13:39.780 [2024-12-06 04:04:32.903722] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:13:39.780 [2024-12-06 04:04:32.903924] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:13:39.780 [2024-12-06 04:04:32.903967] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:13:39.780 [2024-12-06 04:04:32.904134] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:39.780 pt4 00:13:39.780 04:04:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:39.780 04:04:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:13:39.780 04:04:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:13:39.780 04:04:32 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:13:39.780 04:04:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:39.780 04:04:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:39.780 04:04:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:39.780 04:04:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:39.780 04:04:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:39.780 04:04:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:39.780 04:04:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:39.780 04:04:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:39.780 04:04:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:39.780 04:04:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:39.780 04:04:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:39.780 04:04:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:39.780 04:04:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:39.780 04:04:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:39.780 04:04:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:39.780 "name": "raid_bdev1", 00:13:39.780 "uuid": "3aa91d68-16de-4600-bd42-46891854259f", 00:13:39.780 "strip_size_kb": 0, 00:13:39.780 "state": "online", 00:13:39.780 "raid_level": "raid1", 00:13:39.780 "superblock": true, 00:13:39.780 "num_base_bdevs": 4, 00:13:39.780 
"num_base_bdevs_discovered": 4, 00:13:39.780 "num_base_bdevs_operational": 4, 00:13:39.780 "base_bdevs_list": [ 00:13:39.780 { 00:13:39.780 "name": "pt1", 00:13:39.780 "uuid": "00000000-0000-0000-0000-000000000001", 00:13:39.780 "is_configured": true, 00:13:39.780 "data_offset": 2048, 00:13:39.780 "data_size": 63488 00:13:39.780 }, 00:13:39.780 { 00:13:39.780 "name": "pt2", 00:13:39.780 "uuid": "00000000-0000-0000-0000-000000000002", 00:13:39.780 "is_configured": true, 00:13:39.780 "data_offset": 2048, 00:13:39.780 "data_size": 63488 00:13:39.780 }, 00:13:39.780 { 00:13:39.780 "name": "pt3", 00:13:39.780 "uuid": "00000000-0000-0000-0000-000000000003", 00:13:39.780 "is_configured": true, 00:13:39.780 "data_offset": 2048, 00:13:39.780 "data_size": 63488 00:13:39.780 }, 00:13:39.780 { 00:13:39.780 "name": "pt4", 00:13:39.780 "uuid": "00000000-0000-0000-0000-000000000004", 00:13:39.780 "is_configured": true, 00:13:39.780 "data_offset": 2048, 00:13:39.780 "data_size": 63488 00:13:39.780 } 00:13:39.780 ] 00:13:39.780 }' 00:13:39.780 04:04:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:39.780 04:04:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:40.040 04:04:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:13:40.040 04:04:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:13:40.040 04:04:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:13:40.040 04:04:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:13:40.040 04:04:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:13:40.040 04:04:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:13:40.040 04:04:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b 
raid_bdev1 00:13:40.040 04:04:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:40.040 04:04:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:40.040 04:04:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:13:40.040 [2024-12-06 04:04:33.318218] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:40.040 04:04:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:40.040 04:04:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:13:40.040 "name": "raid_bdev1", 00:13:40.040 "aliases": [ 00:13:40.040 "3aa91d68-16de-4600-bd42-46891854259f" 00:13:40.040 ], 00:13:40.040 "product_name": "Raid Volume", 00:13:40.040 "block_size": 512, 00:13:40.040 "num_blocks": 63488, 00:13:40.040 "uuid": "3aa91d68-16de-4600-bd42-46891854259f", 00:13:40.040 "assigned_rate_limits": { 00:13:40.040 "rw_ios_per_sec": 0, 00:13:40.040 "rw_mbytes_per_sec": 0, 00:13:40.040 "r_mbytes_per_sec": 0, 00:13:40.040 "w_mbytes_per_sec": 0 00:13:40.040 }, 00:13:40.040 "claimed": false, 00:13:40.040 "zoned": false, 00:13:40.040 "supported_io_types": { 00:13:40.040 "read": true, 00:13:40.040 "write": true, 00:13:40.040 "unmap": false, 00:13:40.040 "flush": false, 00:13:40.040 "reset": true, 00:13:40.040 "nvme_admin": false, 00:13:40.040 "nvme_io": false, 00:13:40.040 "nvme_io_md": false, 00:13:40.040 "write_zeroes": true, 00:13:40.040 "zcopy": false, 00:13:40.040 "get_zone_info": false, 00:13:40.040 "zone_management": false, 00:13:40.040 "zone_append": false, 00:13:40.040 "compare": false, 00:13:40.040 "compare_and_write": false, 00:13:40.040 "abort": false, 00:13:40.040 "seek_hole": false, 00:13:40.040 "seek_data": false, 00:13:40.040 "copy": false, 00:13:40.040 "nvme_iov_md": false 00:13:40.040 }, 00:13:40.040 "memory_domains": [ 00:13:40.040 { 00:13:40.040 "dma_device_id": "system", 00:13:40.040 
"dma_device_type": 1 00:13:40.040 }, 00:13:40.040 { 00:13:40.040 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:40.040 "dma_device_type": 2 00:13:40.040 }, 00:13:40.040 { 00:13:40.040 "dma_device_id": "system", 00:13:40.040 "dma_device_type": 1 00:13:40.040 }, 00:13:40.040 { 00:13:40.040 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:40.040 "dma_device_type": 2 00:13:40.040 }, 00:13:40.040 { 00:13:40.040 "dma_device_id": "system", 00:13:40.040 "dma_device_type": 1 00:13:40.040 }, 00:13:40.040 { 00:13:40.040 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:40.040 "dma_device_type": 2 00:13:40.040 }, 00:13:40.040 { 00:13:40.040 "dma_device_id": "system", 00:13:40.040 "dma_device_type": 1 00:13:40.040 }, 00:13:40.040 { 00:13:40.040 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:40.040 "dma_device_type": 2 00:13:40.040 } 00:13:40.040 ], 00:13:40.040 "driver_specific": { 00:13:40.040 "raid": { 00:13:40.040 "uuid": "3aa91d68-16de-4600-bd42-46891854259f", 00:13:40.040 "strip_size_kb": 0, 00:13:40.040 "state": "online", 00:13:40.040 "raid_level": "raid1", 00:13:40.040 "superblock": true, 00:13:40.040 "num_base_bdevs": 4, 00:13:40.040 "num_base_bdevs_discovered": 4, 00:13:40.040 "num_base_bdevs_operational": 4, 00:13:40.040 "base_bdevs_list": [ 00:13:40.040 { 00:13:40.040 "name": "pt1", 00:13:40.040 "uuid": "00000000-0000-0000-0000-000000000001", 00:13:40.040 "is_configured": true, 00:13:40.040 "data_offset": 2048, 00:13:40.040 "data_size": 63488 00:13:40.040 }, 00:13:40.040 { 00:13:40.040 "name": "pt2", 00:13:40.040 "uuid": "00000000-0000-0000-0000-000000000002", 00:13:40.040 "is_configured": true, 00:13:40.040 "data_offset": 2048, 00:13:40.040 "data_size": 63488 00:13:40.040 }, 00:13:40.040 { 00:13:40.040 "name": "pt3", 00:13:40.040 "uuid": "00000000-0000-0000-0000-000000000003", 00:13:40.040 "is_configured": true, 00:13:40.040 "data_offset": 2048, 00:13:40.040 "data_size": 63488 00:13:40.040 }, 00:13:40.040 { 00:13:40.040 "name": "pt4", 00:13:40.040 "uuid": 
"00000000-0000-0000-0000-000000000004", 00:13:40.040 "is_configured": true, 00:13:40.040 "data_offset": 2048, 00:13:40.040 "data_size": 63488 00:13:40.040 } 00:13:40.040 ] 00:13:40.040 } 00:13:40.040 } 00:13:40.040 }' 00:13:40.040 04:04:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:13:40.040 04:04:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:13:40.040 pt2 00:13:40.040 pt3 00:13:40.040 pt4' 00:13:40.301 04:04:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:40.301 04:04:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:13:40.301 04:04:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:40.301 04:04:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:40.301 04:04:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:13:40.301 04:04:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:40.301 04:04:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:40.301 04:04:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:40.301 04:04:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:40.301 04:04:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:40.301 04:04:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:40.301 04:04:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:40.301 04:04:33 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:13:40.301 04:04:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:40.301 04:04:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:40.301 04:04:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:40.301 04:04:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:40.301 04:04:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:40.301 04:04:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:40.301 04:04:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:40.301 04:04:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:13:40.301 04:04:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:40.301 04:04:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:40.301 04:04:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:40.301 04:04:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:40.301 04:04:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:40.301 04:04:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:40.301 04:04:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:13:40.301 04:04:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:40.301 04:04:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:13:40.301 04:04:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:40.301 04:04:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:40.301 04:04:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:40.301 04:04:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:40.301 04:04:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:13:40.301 04:04:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:13:40.301 04:04:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:40.301 04:04:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:40.301 [2024-12-06 04:04:33.613690] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:40.301 04:04:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:40.301 04:04:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 3aa91d68-16de-4600-bd42-46891854259f '!=' 3aa91d68-16de-4600-bd42-46891854259f ']' 00:13:40.301 04:04:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1 00:13:40.301 04:04:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:13:40.301 04:04:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@199 -- # return 0 00:13:40.301 04:04:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:13:40.301 04:04:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:40.301 04:04:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:40.301 [2024-12-06 04:04:33.645325] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:13:40.301 04:04:33 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:40.301 04:04:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:13:40.301 04:04:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:40.301 04:04:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:40.301 04:04:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:40.301 04:04:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:40.301 04:04:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:40.301 04:04:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:40.301 04:04:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:40.301 04:04:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:40.301 04:04:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:40.560 04:04:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:40.560 04:04:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:40.560 04:04:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:40.560 04:04:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:40.560 04:04:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:40.560 04:04:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:40.560 "name": "raid_bdev1", 00:13:40.560 "uuid": "3aa91d68-16de-4600-bd42-46891854259f", 00:13:40.560 "strip_size_kb": 0, 00:13:40.560 "state": "online", 
00:13:40.560 "raid_level": "raid1", 00:13:40.560 "superblock": true, 00:13:40.560 "num_base_bdevs": 4, 00:13:40.560 "num_base_bdevs_discovered": 3, 00:13:40.560 "num_base_bdevs_operational": 3, 00:13:40.560 "base_bdevs_list": [ 00:13:40.560 { 00:13:40.560 "name": null, 00:13:40.560 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:40.560 "is_configured": false, 00:13:40.560 "data_offset": 0, 00:13:40.560 "data_size": 63488 00:13:40.560 }, 00:13:40.560 { 00:13:40.560 "name": "pt2", 00:13:40.560 "uuid": "00000000-0000-0000-0000-000000000002", 00:13:40.560 "is_configured": true, 00:13:40.560 "data_offset": 2048, 00:13:40.560 "data_size": 63488 00:13:40.560 }, 00:13:40.560 { 00:13:40.560 "name": "pt3", 00:13:40.560 "uuid": "00000000-0000-0000-0000-000000000003", 00:13:40.560 "is_configured": true, 00:13:40.560 "data_offset": 2048, 00:13:40.560 "data_size": 63488 00:13:40.560 }, 00:13:40.560 { 00:13:40.560 "name": "pt4", 00:13:40.560 "uuid": "00000000-0000-0000-0000-000000000004", 00:13:40.560 "is_configured": true, 00:13:40.560 "data_offset": 2048, 00:13:40.560 "data_size": 63488 00:13:40.560 } 00:13:40.560 ] 00:13:40.560 }' 00:13:40.560 04:04:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:40.560 04:04:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:40.819 04:04:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:13:40.819 04:04:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:40.819 04:04:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:40.819 [2024-12-06 04:04:34.124509] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:13:40.819 [2024-12-06 04:04:34.124588] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:40.819 [2024-12-06 04:04:34.124693] bdev_raid.c: 492:_raid_bdev_destruct: 
*DEBUG*: raid_bdev_destruct 00:13:40.819 [2024-12-06 04:04:34.124800] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:40.819 [2024-12-06 04:04:34.124845] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:13:40.819 04:04:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:40.819 04:04:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:40.819 04:04:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:13:40.819 04:04:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:40.819 04:04:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:40.819 04:04:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:41.078 04:04:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:13:41.078 04:04:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:13:41.078 04:04:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:13:41.078 04:04:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:13:41.078 04:04:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:13:41.079 04:04:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:41.079 04:04:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:41.079 04:04:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:41.079 04:04:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:13:41.079 04:04:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:13:41.079 
04:04:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt3 00:13:41.079 04:04:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:41.079 04:04:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:41.079 04:04:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:41.079 04:04:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:13:41.079 04:04:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:13:41.079 04:04:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt4 00:13:41.079 04:04:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:41.079 04:04:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:41.079 04:04:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:41.079 04:04:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:13:41.079 04:04:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:13:41.079 04:04:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:13:41.079 04:04:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:13:41.079 04:04:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:13:41.079 04:04:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:41.079 04:04:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:41.079 [2024-12-06 04:04:34.212342] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:13:41.079 [2024-12-06 04:04:34.212459] vbdev_passthru.c: 
636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:41.079 [2024-12-06 04:04:34.212483] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480 00:13:41.079 [2024-12-06 04:04:34.212493] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:41.079 [2024-12-06 04:04:34.214832] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:41.079 [2024-12-06 04:04:34.214905] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:13:41.079 [2024-12-06 04:04:34.214994] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:13:41.079 [2024-12-06 04:04:34.215060] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:13:41.079 pt2 00:13:41.079 04:04:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:41.079 04:04:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:13:41.079 04:04:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:41.079 04:04:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:41.079 04:04:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:41.079 04:04:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:41.079 04:04:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:41.079 04:04:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:41.079 04:04:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:41.079 04:04:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:41.079 04:04:34 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:13:41.079 04:04:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:41.079 04:04:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:41.079 04:04:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:41.079 04:04:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:41.079 04:04:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:41.079 04:04:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:41.079 "name": "raid_bdev1", 00:13:41.079 "uuid": "3aa91d68-16de-4600-bd42-46891854259f", 00:13:41.079 "strip_size_kb": 0, 00:13:41.079 "state": "configuring", 00:13:41.079 "raid_level": "raid1", 00:13:41.079 "superblock": true, 00:13:41.079 "num_base_bdevs": 4, 00:13:41.079 "num_base_bdevs_discovered": 1, 00:13:41.079 "num_base_bdevs_operational": 3, 00:13:41.079 "base_bdevs_list": [ 00:13:41.079 { 00:13:41.079 "name": null, 00:13:41.079 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:41.079 "is_configured": false, 00:13:41.079 "data_offset": 2048, 00:13:41.079 "data_size": 63488 00:13:41.079 }, 00:13:41.079 { 00:13:41.079 "name": "pt2", 00:13:41.079 "uuid": "00000000-0000-0000-0000-000000000002", 00:13:41.079 "is_configured": true, 00:13:41.079 "data_offset": 2048, 00:13:41.079 "data_size": 63488 00:13:41.079 }, 00:13:41.079 { 00:13:41.079 "name": null, 00:13:41.079 "uuid": "00000000-0000-0000-0000-000000000003", 00:13:41.079 "is_configured": false, 00:13:41.079 "data_offset": 2048, 00:13:41.079 "data_size": 63488 00:13:41.079 }, 00:13:41.079 { 00:13:41.079 "name": null, 00:13:41.079 "uuid": "00000000-0000-0000-0000-000000000004", 00:13:41.079 "is_configured": false, 00:13:41.079 "data_offset": 2048, 00:13:41.079 "data_size": 63488 00:13:41.079 } 00:13:41.079 ] 00:13:41.079 }' 
00:13:41.079 04:04:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:41.079 04:04:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:41.339 04:04:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ )) 00:13:41.339 04:04:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:13:41.339 04:04:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:13:41.339 04:04:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:41.339 04:04:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:41.339 [2024-12-06 04:04:34.679595] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:13:41.339 [2024-12-06 04:04:34.679712] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:41.339 [2024-12-06 04:04:34.679739] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ba80 00:13:41.339 [2024-12-06 04:04:34.679748] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:41.339 [2024-12-06 04:04:34.680233] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:41.339 [2024-12-06 04:04:34.680254] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:13:41.339 [2024-12-06 04:04:34.680337] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:13:41.339 [2024-12-06 04:04:34.680369] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:13:41.339 pt3 00:13:41.339 04:04:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:41.339 04:04:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 
3 00:13:41.339 04:04:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:41.339 04:04:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:41.339 04:04:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:41.339 04:04:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:41.339 04:04:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:41.339 04:04:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:41.339 04:04:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:41.339 04:04:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:41.339 04:04:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:41.598 04:04:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:41.598 04:04:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:41.598 04:04:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:41.598 04:04:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:41.598 04:04:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:41.598 04:04:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:41.598 "name": "raid_bdev1", 00:13:41.598 "uuid": "3aa91d68-16de-4600-bd42-46891854259f", 00:13:41.598 "strip_size_kb": 0, 00:13:41.598 "state": "configuring", 00:13:41.598 "raid_level": "raid1", 00:13:41.598 "superblock": true, 00:13:41.598 "num_base_bdevs": 4, 00:13:41.598 "num_base_bdevs_discovered": 2, 00:13:41.598 "num_base_bdevs_operational": 3, 00:13:41.598 
"base_bdevs_list": [ 00:13:41.598 { 00:13:41.598 "name": null, 00:13:41.598 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:41.598 "is_configured": false, 00:13:41.598 "data_offset": 2048, 00:13:41.598 "data_size": 63488 00:13:41.598 }, 00:13:41.598 { 00:13:41.598 "name": "pt2", 00:13:41.598 "uuid": "00000000-0000-0000-0000-000000000002", 00:13:41.598 "is_configured": true, 00:13:41.598 "data_offset": 2048, 00:13:41.598 "data_size": 63488 00:13:41.598 }, 00:13:41.598 { 00:13:41.598 "name": "pt3", 00:13:41.598 "uuid": "00000000-0000-0000-0000-000000000003", 00:13:41.598 "is_configured": true, 00:13:41.598 "data_offset": 2048, 00:13:41.598 "data_size": 63488 00:13:41.598 }, 00:13:41.598 { 00:13:41.598 "name": null, 00:13:41.598 "uuid": "00000000-0000-0000-0000-000000000004", 00:13:41.598 "is_configured": false, 00:13:41.598 "data_offset": 2048, 00:13:41.598 "data_size": 63488 00:13:41.598 } 00:13:41.598 ] 00:13:41.598 }' 00:13:41.598 04:04:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:41.598 04:04:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:41.857 04:04:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ )) 00:13:41.857 04:04:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:13:41.857 04:04:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@519 -- # i=3 00:13:41.857 04:04:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:13:41.857 04:04:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:41.857 04:04:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:41.857 [2024-12-06 04:04:35.118846] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:13:41.857 [2024-12-06 04:04:35.118963] vbdev_passthru.c: 
636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:41.857 [2024-12-06 04:04:35.119015] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000bd80 00:13:41.857 [2024-12-06 04:04:35.119042] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:41.857 [2024-12-06 04:04:35.119530] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:41.857 [2024-12-06 04:04:35.119591] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:13:41.857 [2024-12-06 04:04:35.119709] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:13:41.857 [2024-12-06 04:04:35.119759] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:13:41.857 [2024-12-06 04:04:35.119928] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:13:41.857 [2024-12-06 04:04:35.119965] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:13:41.857 [2024-12-06 04:04:35.120243] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:13:41.857 [2024-12-06 04:04:35.120447] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:13:41.857 [2024-12-06 04:04:35.120494] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:13:41.857 [2024-12-06 04:04:35.120676] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:41.857 pt4 00:13:41.857 04:04:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:41.857 04:04:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:13:41.857 04:04:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:41.857 04:04:35 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:41.857 04:04:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:41.857 04:04:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:41.857 04:04:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:41.857 04:04:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:41.857 04:04:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:41.857 04:04:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:41.857 04:04:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:41.857 04:04:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:41.857 04:04:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:41.857 04:04:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:41.857 04:04:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:41.857 04:04:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:41.857 04:04:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:41.857 "name": "raid_bdev1", 00:13:41.857 "uuid": "3aa91d68-16de-4600-bd42-46891854259f", 00:13:41.857 "strip_size_kb": 0, 00:13:41.857 "state": "online", 00:13:41.857 "raid_level": "raid1", 00:13:41.857 "superblock": true, 00:13:41.857 "num_base_bdevs": 4, 00:13:41.857 "num_base_bdevs_discovered": 3, 00:13:41.857 "num_base_bdevs_operational": 3, 00:13:41.857 "base_bdevs_list": [ 00:13:41.857 { 00:13:41.857 "name": null, 00:13:41.857 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:41.857 "is_configured": false, 00:13:41.857 
"data_offset": 2048, 00:13:41.857 "data_size": 63488 00:13:41.857 }, 00:13:41.857 { 00:13:41.857 "name": "pt2", 00:13:41.857 "uuid": "00000000-0000-0000-0000-000000000002", 00:13:41.857 "is_configured": true, 00:13:41.857 "data_offset": 2048, 00:13:41.857 "data_size": 63488 00:13:41.857 }, 00:13:41.857 { 00:13:41.857 "name": "pt3", 00:13:41.857 "uuid": "00000000-0000-0000-0000-000000000003", 00:13:41.857 "is_configured": true, 00:13:41.857 "data_offset": 2048, 00:13:41.857 "data_size": 63488 00:13:41.857 }, 00:13:41.857 { 00:13:41.857 "name": "pt4", 00:13:41.857 "uuid": "00000000-0000-0000-0000-000000000004", 00:13:41.857 "is_configured": true, 00:13:41.857 "data_offset": 2048, 00:13:41.857 "data_size": 63488 00:13:41.857 } 00:13:41.857 ] 00:13:41.857 }' 00:13:41.857 04:04:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:41.857 04:04:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:42.424 04:04:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:13:42.424 04:04:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:42.424 04:04:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:42.424 [2024-12-06 04:04:35.534101] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:13:42.424 [2024-12-06 04:04:35.534181] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:42.424 [2024-12-06 04:04:35.534283] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:42.424 [2024-12-06 04:04:35.534383] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:42.424 [2024-12-06 04:04:35.534436] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:13:42.425 04:04:35 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:42.425 04:04:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:13:42.425 04:04:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:42.425 04:04:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:42.425 04:04:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:42.425 04:04:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:42.425 04:04:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:13:42.425 04:04:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:13:42.425 04:04:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@532 -- # '[' 4 -gt 2 ']' 00:13:42.425 04:04:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@534 -- # i=3 00:13:42.425 04:04:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@535 -- # rpc_cmd bdev_passthru_delete pt4 00:13:42.425 04:04:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:42.425 04:04:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:42.425 04:04:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:42.425 04:04:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:13:42.425 04:04:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:42.425 04:04:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:42.425 [2024-12-06 04:04:35.605962] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:13:42.425 [2024-12-06 04:04:35.606077] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev 
opened 00:13:42.425 [2024-12-06 04:04:35.606132] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c080 00:13:42.425 [2024-12-06 04:04:35.606170] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:42.425 [2024-12-06 04:04:35.608372] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:42.425 [2024-12-06 04:04:35.608447] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:13:42.425 [2024-12-06 04:04:35.608571] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:13:42.425 [2024-12-06 04:04:35.608644] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:13:42.425 [2024-12-06 04:04:35.608830] bdev_raid.c:3685:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater pt1 00:13:42.425 than existing raid bdev raid_bdev1 (2) 00:13:42.425 [2024-12-06 04:04:35.608879] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:13:42.425 [2024-12-06 04:04:35.608897] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state configuring 00:13:42.425 [2024-12-06 04:04:35.608970] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:13:42.425 [2024-12-06 04:04:35.609097] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:13:42.425 04:04:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:42.425 04:04:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@542 -- # '[' 4 -gt 2 ']' 00:13:42.425 04:04:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@545 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:13:42.425 04:04:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:42.425 04:04:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local 
expected_state=configuring 00:13:42.425 04:04:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:42.425 04:04:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:42.425 04:04:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:42.425 04:04:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:42.425 04:04:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:42.425 04:04:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:42.425 04:04:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:42.425 04:04:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:42.425 04:04:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:42.425 04:04:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:42.425 04:04:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:42.425 04:04:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:42.425 04:04:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:42.425 "name": "raid_bdev1", 00:13:42.425 "uuid": "3aa91d68-16de-4600-bd42-46891854259f", 00:13:42.425 "strip_size_kb": 0, 00:13:42.425 "state": "configuring", 00:13:42.425 "raid_level": "raid1", 00:13:42.425 "superblock": true, 00:13:42.425 "num_base_bdevs": 4, 00:13:42.425 "num_base_bdevs_discovered": 2, 00:13:42.425 "num_base_bdevs_operational": 3, 00:13:42.425 "base_bdevs_list": [ 00:13:42.425 { 00:13:42.425 "name": null, 00:13:42.425 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:42.425 "is_configured": false, 00:13:42.425 "data_offset": 2048, 00:13:42.425 
"data_size": 63488 00:13:42.425 }, 00:13:42.425 { 00:13:42.425 "name": "pt2", 00:13:42.425 "uuid": "00000000-0000-0000-0000-000000000002", 00:13:42.425 "is_configured": true, 00:13:42.425 "data_offset": 2048, 00:13:42.425 "data_size": 63488 00:13:42.425 }, 00:13:42.425 { 00:13:42.425 "name": "pt3", 00:13:42.425 "uuid": "00000000-0000-0000-0000-000000000003", 00:13:42.425 "is_configured": true, 00:13:42.425 "data_offset": 2048, 00:13:42.425 "data_size": 63488 00:13:42.425 }, 00:13:42.425 { 00:13:42.425 "name": null, 00:13:42.425 "uuid": "00000000-0000-0000-0000-000000000004", 00:13:42.425 "is_configured": false, 00:13:42.425 "data_offset": 2048, 00:13:42.425 "data_size": 63488 00:13:42.425 } 00:13:42.425 ] 00:13:42.425 }' 00:13:42.425 04:04:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:42.425 04:04:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:42.993 04:04:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # rpc_cmd bdev_raid_get_bdevs configuring 00:13:42.993 04:04:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:42.993 04:04:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:42.993 04:04:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:13:42.993 04:04:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:42.993 04:04:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # [[ false == \f\a\l\s\e ]] 00:13:42.993 04:04:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@549 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:13:42.993 04:04:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:42.993 04:04:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:42.993 [2024-12-06 
04:04:36.109162] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:13:42.993 [2024-12-06 04:04:36.109278] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:42.993 [2024-12-06 04:04:36.109322] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c680 00:13:42.993 [2024-12-06 04:04:36.109377] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:42.993 [2024-12-06 04:04:36.109923] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:42.993 [2024-12-06 04:04:36.109986] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:13:42.993 [2024-12-06 04:04:36.110138] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:13:42.993 [2024-12-06 04:04:36.110197] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:13:42.993 [2024-12-06 04:04:36.110377] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008900 00:13:42.993 [2024-12-06 04:04:36.110421] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:13:42.993 [2024-12-06 04:04:36.110747] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:13:42.993 [2024-12-06 04:04:36.110918] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:13:42.993 [2024-12-06 04:04:36.110931] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:13:42.993 [2024-12-06 04:04:36.111120] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:42.993 pt4 00:13:42.993 04:04:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:42.993 04:04:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:13:42.993 04:04:36 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:42.993 04:04:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:42.993 04:04:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:42.993 04:04:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:42.993 04:04:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:42.993 04:04:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:42.993 04:04:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:42.993 04:04:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:42.993 04:04:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:42.993 04:04:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:42.993 04:04:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:42.993 04:04:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:42.993 04:04:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:42.993 04:04:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:42.993 04:04:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:42.993 "name": "raid_bdev1", 00:13:42.993 "uuid": "3aa91d68-16de-4600-bd42-46891854259f", 00:13:42.993 "strip_size_kb": 0, 00:13:42.993 "state": "online", 00:13:42.993 "raid_level": "raid1", 00:13:42.993 "superblock": true, 00:13:42.993 "num_base_bdevs": 4, 00:13:42.993 "num_base_bdevs_discovered": 3, 00:13:42.993 "num_base_bdevs_operational": 3, 00:13:42.993 "base_bdevs_list": [ 00:13:42.993 { 
00:13:42.993 "name": null, 00:13:42.993 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:42.993 "is_configured": false, 00:13:42.993 "data_offset": 2048, 00:13:42.993 "data_size": 63488 00:13:42.993 }, 00:13:42.993 { 00:13:42.993 "name": "pt2", 00:13:42.993 "uuid": "00000000-0000-0000-0000-000000000002", 00:13:42.993 "is_configured": true, 00:13:42.993 "data_offset": 2048, 00:13:42.993 "data_size": 63488 00:13:42.993 }, 00:13:42.993 { 00:13:42.993 "name": "pt3", 00:13:42.993 "uuid": "00000000-0000-0000-0000-000000000003", 00:13:42.993 "is_configured": true, 00:13:42.993 "data_offset": 2048, 00:13:42.993 "data_size": 63488 00:13:42.993 }, 00:13:42.993 { 00:13:42.993 "name": "pt4", 00:13:42.993 "uuid": "00000000-0000-0000-0000-000000000004", 00:13:42.993 "is_configured": true, 00:13:42.993 "data_offset": 2048, 00:13:42.993 "data_size": 63488 00:13:42.993 } 00:13:42.993 ] 00:13:42.993 }' 00:13:42.993 04:04:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:42.993 04:04:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:43.253 04:04:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:13:43.253 04:04:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:13:43.253 04:04:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:43.253 04:04:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:43.253 04:04:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:43.253 04:04:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:13:43.253 04:04:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:13:43.253 04:04:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:43.253 
04:04:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:13:43.253 04:04:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:43.512 [2024-12-06 04:04:36.608677] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:43.512 04:04:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:43.512 04:04:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # '[' 3aa91d68-16de-4600-bd42-46891854259f '!=' 3aa91d68-16de-4600-bd42-46891854259f ']' 00:13:43.512 04:04:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 74652 00:13:43.512 04:04:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 74652 ']' 00:13:43.512 04:04:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # kill -0 74652 00:13:43.512 04:04:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # uname 00:13:43.512 04:04:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:43.512 04:04:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 74652 00:13:43.512 killing process with pid 74652 00:13:43.512 04:04:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:43.512 04:04:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:43.512 04:04:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 74652' 00:13:43.512 04:04:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@973 -- # kill 74652 00:13:43.512 [2024-12-06 04:04:36.651633] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:13:43.512 [2024-12-06 04:04:36.651721] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:43.512 [2024-12-06 04:04:36.651800] 
bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:43.512 [2024-12-06 04:04:36.651813] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 00:13:43.512 04:04:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@978 -- # wait 74652 00:13:43.771 [2024-12-06 04:04:37.056111] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:13:45.150 ************************************ 00:13:45.150 END TEST raid_superblock_test 00:13:45.150 ************************************ 00:13:45.150 04:04:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:13:45.150 00:13:45.150 real 0m8.366s 00:13:45.150 user 0m13.076s 00:13:45.150 sys 0m1.483s 00:13:45.150 04:04:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:45.150 04:04:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:45.150 04:04:38 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid1 4 read 00:13:45.150 04:04:38 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:13:45.150 04:04:38 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:45.150 04:04:38 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:13:45.150 ************************************ 00:13:45.150 START TEST raid_read_error_test 00:13:45.150 ************************************ 00:13:45.150 04:04:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid1 4 read 00:13:45.150 04:04:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1 00:13:45.150 04:04:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4 00:13:45.150 04:04:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:13:45.150 04:04:38 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:13:45.150 04:04:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:13:45.150 04:04:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:13:45.150 04:04:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:13:45.150 04:04:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:13:45.150 04:04:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:13:45.150 04:04:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:13:45.150 04:04:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:13:45.150 04:04:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:13:45.150 04:04:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:13:45.151 04:04:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:13:45.151 04:04:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4 00:13:45.151 04:04:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:13:45.151 04:04:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:13:45.151 04:04:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:13:45.151 04:04:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:13:45.151 04:04:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:13:45.151 04:04:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:13:45.151 04:04:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:13:45.151 04:04:38 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:13:45.151 04:04:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:13:45.151 04:04:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']' 00:13:45.151 04:04:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0 00:13:45.151 04:04:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:13:45.151 04:04:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.fa6II6gI5t 00:13:45.151 04:04:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=75134 00:13:45.151 04:04:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 75134 00:13:45.151 04:04:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # '[' -z 75134 ']' 00:13:45.151 04:04:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:45.151 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:45.151 04:04:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:45.151 04:04:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:45.151 04:04:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:45.151 04:04:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:13:45.151 04:04:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:45.151 [2024-12-06 04:04:38.397317] Starting SPDK v25.01-pre git sha1 a4d2a837b / DPDK 24.03.0 initialization... 
00:13:45.151 [2024-12-06 04:04:38.397443] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75134 ] 00:13:45.409 [2024-12-06 04:04:38.570641] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:45.410 [2024-12-06 04:04:38.689266] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:45.668 [2024-12-06 04:04:38.893254] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:45.668 [2024-12-06 04:04:38.893321] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:45.928 04:04:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:45.928 04:04:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@868 -- # return 0 00:13:45.928 04:04:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:13:45.928 04:04:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:13:45.928 04:04:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:45.928 04:04:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:46.188 BaseBdev1_malloc 00:13:46.188 04:04:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:46.188 04:04:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:13:46.188 04:04:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:46.188 04:04:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:46.188 true 00:13:46.188 04:04:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:13:46.188 04:04:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:13:46.188 04:04:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:46.188 04:04:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:46.188 [2024-12-06 04:04:39.303310] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:13:46.188 [2024-12-06 04:04:39.303367] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:46.188 [2024-12-06 04:04:39.303387] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:13:46.188 [2024-12-06 04:04:39.303399] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:46.188 [2024-12-06 04:04:39.305514] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:46.188 [2024-12-06 04:04:39.305632] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:13:46.188 BaseBdev1 00:13:46.188 04:04:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:46.188 04:04:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:13:46.188 04:04:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:13:46.188 04:04:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:46.188 04:04:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:46.188 BaseBdev2_malloc 00:13:46.188 04:04:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:46.188 04:04:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:13:46.188 04:04:39 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:13:46.188 04:04:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:46.188 true 00:13:46.188 04:04:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:46.188 04:04:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:13:46.188 04:04:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:46.188 04:04:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:46.188 [2024-12-06 04:04:39.366228] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:13:46.188 [2024-12-06 04:04:39.366329] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:46.188 [2024-12-06 04:04:39.366363] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:13:46.188 [2024-12-06 04:04:39.366396] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:46.188 [2024-12-06 04:04:39.368419] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:46.188 [2024-12-06 04:04:39.368507] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:13:46.188 BaseBdev2 00:13:46.188 04:04:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:46.188 04:04:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:13:46.188 04:04:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:13:46.188 04:04:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:46.188 04:04:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:46.188 BaseBdev3_malloc 00:13:46.188 04:04:39 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:46.188 04:04:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:13:46.188 04:04:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:46.188 04:04:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:46.188 true 00:13:46.188 04:04:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:46.188 04:04:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:13:46.188 04:04:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:46.188 04:04:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:46.188 [2024-12-06 04:04:39.446448] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:13:46.188 [2024-12-06 04:04:39.446544] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:46.188 [2024-12-06 04:04:39.446579] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:13:46.188 [2024-12-06 04:04:39.446612] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:46.188 [2024-12-06 04:04:39.448631] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:46.188 [2024-12-06 04:04:39.448706] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:13:46.188 BaseBdev3 00:13:46.188 04:04:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:46.188 04:04:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:13:46.188 04:04:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b 
BaseBdev4_malloc 00:13:46.188 04:04:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:46.188 04:04:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:46.188 BaseBdev4_malloc 00:13:46.188 04:04:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:46.188 04:04:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc 00:13:46.188 04:04:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:46.188 04:04:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:46.188 true 00:13:46.188 04:04:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:46.188 04:04:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:13:46.188 04:04:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:46.188 04:04:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:46.188 [2024-12-06 04:04:39.515002] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:13:46.188 [2024-12-06 04:04:39.515107] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:46.188 [2024-12-06 04:04:39.515141] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:13:46.188 [2024-12-06 04:04:39.515189] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:46.188 [2024-12-06 04:04:39.517218] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:46.188 [2024-12-06 04:04:39.517292] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:13:46.188 BaseBdev4 00:13:46.188 04:04:39 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:46.188 04:04:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s 00:13:46.188 04:04:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:46.188 04:04:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:46.188 [2024-12-06 04:04:39.527034] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:46.188 [2024-12-06 04:04:39.528791] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:46.188 [2024-12-06 04:04:39.528904] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:13:46.188 [2024-12-06 04:04:39.528984] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:13:46.188 [2024-12-06 04:04:39.529247] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008580 00:13:46.188 [2024-12-06 04:04:39.529297] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:13:46.188 [2024-12-06 04:04:39.529537] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000068a0 00:13:46.188 [2024-12-06 04:04:39.529728] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008580 00:13:46.188 [2024-12-06 04:04:39.529767] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008580 00:13:46.188 [2024-12-06 04:04:39.529949] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:46.188 04:04:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:46.188 04:04:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:13:46.188 04:04:39 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:46.188 04:04:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:46.188 04:04:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:46.188 04:04:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:46.188 04:04:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:46.188 04:04:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:46.188 04:04:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:46.188 04:04:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:46.188 04:04:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:46.449 04:04:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:46.449 04:04:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:46.449 04:04:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:46.449 04:04:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:46.449 04:04:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:46.449 04:04:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:46.449 "name": "raid_bdev1", 00:13:46.449 "uuid": "a46fa9db-569c-45c6-ad94-2204a10bce27", 00:13:46.449 "strip_size_kb": 0, 00:13:46.449 "state": "online", 00:13:46.449 "raid_level": "raid1", 00:13:46.449 "superblock": true, 00:13:46.449 "num_base_bdevs": 4, 00:13:46.449 "num_base_bdevs_discovered": 4, 00:13:46.449 "num_base_bdevs_operational": 4, 00:13:46.449 "base_bdevs_list": [ 00:13:46.449 { 
00:13:46.449 "name": "BaseBdev1", 00:13:46.449 "uuid": "7da42c9a-43cc-5ffa-a355-ce55e4e8ed0e", 00:13:46.449 "is_configured": true, 00:13:46.449 "data_offset": 2048, 00:13:46.449 "data_size": 63488 00:13:46.449 }, 00:13:46.449 { 00:13:46.449 "name": "BaseBdev2", 00:13:46.449 "uuid": "9ee42536-4163-5191-98a4-01bd0aeed0ca", 00:13:46.449 "is_configured": true, 00:13:46.449 "data_offset": 2048, 00:13:46.449 "data_size": 63488 00:13:46.449 }, 00:13:46.449 { 00:13:46.449 "name": "BaseBdev3", 00:13:46.449 "uuid": "ca608b6e-0881-5465-8037-d832c7c8296d", 00:13:46.449 "is_configured": true, 00:13:46.449 "data_offset": 2048, 00:13:46.449 "data_size": 63488 00:13:46.449 }, 00:13:46.449 { 00:13:46.449 "name": "BaseBdev4", 00:13:46.449 "uuid": "4658f914-30ec-5129-afb4-1faab3859653", 00:13:46.449 "is_configured": true, 00:13:46.449 "data_offset": 2048, 00:13:46.449 "data_size": 63488 00:13:46.449 } 00:13:46.449 ] 00:13:46.449 }' 00:13:46.449 04:04:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:46.449 04:04:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:46.708 04:04:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:13:46.708 04:04:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:13:46.708 [2024-12-06 04:04:40.047564] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006a40 00:13:47.644 04:04:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:13:47.644 04:04:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:47.644 04:04:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:47.644 04:04:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:47.644 04:04:40 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:13:47.644 04:04:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]] 00:13:47.644 04:04:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ read = \w\r\i\t\e ]] 00:13:47.644 04:04:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=4 00:13:47.644 04:04:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:13:47.644 04:04:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:47.644 04:04:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:47.644 04:04:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:47.644 04:04:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:47.644 04:04:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:47.644 04:04:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:47.644 04:04:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:47.644 04:04:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:47.644 04:04:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:47.905 04:04:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:47.905 04:04:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:47.905 04:04:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:47.905 04:04:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:47.906 04:04:41 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:47.906 04:04:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:47.906 "name": "raid_bdev1", 00:13:47.906 "uuid": "a46fa9db-569c-45c6-ad94-2204a10bce27", 00:13:47.906 "strip_size_kb": 0, 00:13:47.906 "state": "online", 00:13:47.906 "raid_level": "raid1", 00:13:47.906 "superblock": true, 00:13:47.906 "num_base_bdevs": 4, 00:13:47.906 "num_base_bdevs_discovered": 4, 00:13:47.906 "num_base_bdevs_operational": 4, 00:13:47.906 "base_bdevs_list": [ 00:13:47.906 { 00:13:47.906 "name": "BaseBdev1", 00:13:47.906 "uuid": "7da42c9a-43cc-5ffa-a355-ce55e4e8ed0e", 00:13:47.906 "is_configured": true, 00:13:47.906 "data_offset": 2048, 00:13:47.906 "data_size": 63488 00:13:47.906 }, 00:13:47.906 { 00:13:47.906 "name": "BaseBdev2", 00:13:47.906 "uuid": "9ee42536-4163-5191-98a4-01bd0aeed0ca", 00:13:47.906 "is_configured": true, 00:13:47.906 "data_offset": 2048, 00:13:47.906 "data_size": 63488 00:13:47.906 }, 00:13:47.906 { 00:13:47.906 "name": "BaseBdev3", 00:13:47.906 "uuid": "ca608b6e-0881-5465-8037-d832c7c8296d", 00:13:47.906 "is_configured": true, 00:13:47.906 "data_offset": 2048, 00:13:47.906 "data_size": 63488 00:13:47.906 }, 00:13:47.906 { 00:13:47.906 "name": "BaseBdev4", 00:13:47.906 "uuid": "4658f914-30ec-5129-afb4-1faab3859653", 00:13:47.906 "is_configured": true, 00:13:47.906 "data_offset": 2048, 00:13:47.906 "data_size": 63488 00:13:47.906 } 00:13:47.906 ] 00:13:47.906 }' 00:13:47.906 04:04:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:47.906 04:04:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:48.166 04:04:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:13:48.166 04:04:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:48.166 04:04:41 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@10 -- # set +x 00:13:48.166 [2024-12-06 04:04:41.496502] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:13:48.166 [2024-12-06 04:04:41.496590] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:48.166 [2024-12-06 04:04:41.499376] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:48.166 [2024-12-06 04:04:41.499474] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:48.166 [2024-12-06 04:04:41.499640] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:48.166 [2024-12-06 04:04:41.499690] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state offline 00:13:48.166 { 00:13:48.166 "results": [ 00:13:48.166 { 00:13:48.166 "job": "raid_bdev1", 00:13:48.166 "core_mask": "0x1", 00:13:48.166 "workload": "randrw", 00:13:48.166 "percentage": 50, 00:13:48.166 "status": "finished", 00:13:48.166 "queue_depth": 1, 00:13:48.166 "io_size": 131072, 00:13:48.166 "runtime": 1.450044, 00:13:48.166 "iops": 10456.924065752488, 00:13:48.166 "mibps": 1307.115508219061, 00:13:48.166 "io_failed": 0, 00:13:48.166 "io_timeout": 0, 00:13:48.166 "avg_latency_us": 92.89648952993194, 00:13:48.166 "min_latency_us": 24.482096069868994, 00:13:48.166 "max_latency_us": 1531.0812227074236 00:13:48.166 } 00:13:48.166 ], 00:13:48.166 "core_count": 1 00:13:48.166 } 00:13:48.166 04:04:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:48.166 04:04:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 75134 00:13:48.166 04:04:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # '[' -z 75134 ']' 00:13:48.166 04:04:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # kill -0 75134 00:13:48.166 04:04:41 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@959 -- # uname 00:13:48.166 04:04:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:48.166 04:04:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 75134 00:13:48.454 04:04:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:48.454 killing process with pid 75134 00:13:48.454 04:04:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:48.454 04:04:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 75134' 00:13:48.454 04:04:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@973 -- # kill 75134 00:13:48.454 [2024-12-06 04:04:41.533424] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:13:48.454 04:04:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@978 -- # wait 75134 00:13:48.733 [2024-12-06 04:04:41.853643] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:13:49.677 04:04:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:13:49.677 04:04:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.fa6II6gI5t 00:13:49.677 04:04:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:13:49.936 04:04:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.00 00:13:49.936 ************************************ 00:13:49.936 END TEST raid_read_error_test 00:13:49.936 ************************************ 00:13:49.936 04:04:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid1 00:13:49.936 04:04:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:13:49.936 04:04:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@199 -- # return 0 00:13:49.936 04:04:43 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@847 -- # [[ 0.00 = \0\.\0\0 ]] 00:13:49.936 00:13:49.936 real 0m4.759s 00:13:49.936 user 0m5.647s 00:13:49.936 sys 0m0.598s 00:13:49.936 04:04:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:49.936 04:04:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:49.936 04:04:43 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test raid1 4 write 00:13:49.936 04:04:43 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:13:49.936 04:04:43 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:49.936 04:04:43 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:13:49.936 ************************************ 00:13:49.936 START TEST raid_write_error_test 00:13:49.936 ************************************ 00:13:49.936 04:04:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid1 4 write 00:13:49.936 04:04:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1 00:13:49.936 04:04:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4 00:13:49.936 04:04:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:13:49.936 04:04:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:13:49.936 04:04:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:13:49.936 04:04:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:13:49.936 04:04:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:13:49.936 04:04:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:13:49.936 04:04:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:13:49.936 04:04:43 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@793 -- # (( i++ )) 00:13:49.936 04:04:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:13:49.936 04:04:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:13:49.936 04:04:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:13:49.936 04:04:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:13:49.936 04:04:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4 00:13:49.936 04:04:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:13:49.936 04:04:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:13:49.936 04:04:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:13:49.936 04:04:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:13:49.936 04:04:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:13:49.936 04:04:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:13:49.936 04:04:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:13:49.936 04:04:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:13:49.936 04:04:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:13:49.936 04:04:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']' 00:13:49.936 04:04:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0 00:13:49.936 04:04:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:13:49.936 04:04:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.XbSRUxj5V6 00:13:49.936 04:04:43 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=75285 00:13:49.936 04:04:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 75285 00:13:49.936 04:04:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:13:49.936 04:04:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # '[' -z 75285 ']' 00:13:49.937 04:04:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:49.937 04:04:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:49.937 04:04:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:49.937 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:49.937 04:04:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:49.937 04:04:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:49.937 [2024-12-06 04:04:43.199402] Starting SPDK v25.01-pre git sha1 a4d2a837b / DPDK 24.03.0 initialization... 
00:13:49.937 [2024-12-06 04:04:43.199599] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75285 ] 00:13:50.195 [2024-12-06 04:04:43.371654] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:50.195 [2024-12-06 04:04:43.497172] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:50.455 [2024-12-06 04:04:43.695376] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:50.455 [2024-12-06 04:04:43.695418] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:50.714 04:04:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:50.714 04:04:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@868 -- # return 0 00:13:50.714 04:04:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:13:50.714 04:04:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:13:50.714 04:04:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:50.714 04:04:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:50.976 BaseBdev1_malloc 00:13:50.976 04:04:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:50.976 04:04:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:13:50.976 04:04:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:50.976 04:04:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:50.976 true 00:13:50.976 04:04:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:13:50.976 04:04:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:13:50.976 04:04:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:50.976 04:04:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:50.976 [2024-12-06 04:04:44.094619] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:13:50.976 [2024-12-06 04:04:44.094735] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:50.976 [2024-12-06 04:04:44.094776] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:13:50.976 [2024-12-06 04:04:44.094808] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:50.976 [2024-12-06 04:04:44.097123] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:50.976 [2024-12-06 04:04:44.097203] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:13:50.976 BaseBdev1 00:13:50.976 04:04:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:50.976 04:04:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:13:50.976 04:04:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:13:50.976 04:04:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:50.976 04:04:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:50.976 BaseBdev2_malloc 00:13:50.976 04:04:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:50.976 04:04:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:13:50.976 04:04:44 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:50.976 04:04:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:50.976 true 00:13:50.976 04:04:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:50.976 04:04:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:13:50.976 04:04:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:50.976 04:04:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:50.976 [2024-12-06 04:04:44.165987] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:13:50.976 [2024-12-06 04:04:44.166140] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:50.976 [2024-12-06 04:04:44.166201] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:13:50.976 [2024-12-06 04:04:44.166220] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:50.976 [2024-12-06 04:04:44.168733] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:50.976 [2024-12-06 04:04:44.168778] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:13:50.976 BaseBdev2 00:13:50.976 04:04:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:50.976 04:04:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:13:50.976 04:04:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:13:50.976 04:04:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:50.976 04:04:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 
00:13:50.976 BaseBdev3_malloc 00:13:50.976 04:04:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:50.976 04:04:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:13:50.976 04:04:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:50.976 04:04:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:50.976 true 00:13:50.976 04:04:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:50.976 04:04:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:13:50.976 04:04:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:50.976 04:04:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:50.976 [2024-12-06 04:04:44.244875] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:13:50.976 [2024-12-06 04:04:44.244992] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:50.976 [2024-12-06 04:04:44.245020] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:13:50.976 [2024-12-06 04:04:44.245034] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:50.976 [2024-12-06 04:04:44.247393] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:50.976 [2024-12-06 04:04:44.247435] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:13:50.976 BaseBdev3 00:13:50.976 04:04:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:50.976 04:04:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:13:50.976 04:04:44 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:13:50.976 04:04:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:50.976 04:04:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:50.976 BaseBdev4_malloc 00:13:50.976 04:04:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:50.976 04:04:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc 00:13:50.976 04:04:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:50.976 04:04:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:50.976 true 00:13:50.976 04:04:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:50.976 04:04:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:13:50.976 04:04:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:50.976 04:04:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:50.976 [2024-12-06 04:04:44.311684] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:13:50.976 [2024-12-06 04:04:44.311742] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:50.976 [2024-12-06 04:04:44.311763] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:13:50.976 [2024-12-06 04:04:44.311775] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:50.976 [2024-12-06 04:04:44.314204] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:50.976 [2024-12-06 04:04:44.314249] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:13:50.976 BaseBdev4 
00:13:50.976 04:04:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:50.976 04:04:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s 00:13:50.976 04:04:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:50.976 04:04:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:50.976 [2024-12-06 04:04:44.323727] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:50.976 [2024-12-06 04:04:44.325870] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:50.976 [2024-12-06 04:04:44.325993] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:13:50.976 [2024-12-06 04:04:44.326106] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:13:50.976 [2024-12-06 04:04:44.326401] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008580 00:13:50.976 [2024-12-06 04:04:44.326453] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:13:50.976 [2024-12-06 04:04:44.326741] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000068a0 00:13:50.976 [2024-12-06 04:04:44.326965] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008580 00:13:50.976 [2024-12-06 04:04:44.327010] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008580 00:13:50.976 [2024-12-06 04:04:44.327241] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:51.237 04:04:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:51.237 04:04:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 
online raid1 0 4 00:13:51.237 04:04:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:51.237 04:04:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:51.237 04:04:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:51.237 04:04:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:51.237 04:04:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:51.237 04:04:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:51.237 04:04:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:51.237 04:04:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:51.237 04:04:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:51.237 04:04:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:51.237 04:04:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:51.237 04:04:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:51.237 04:04:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:51.237 04:04:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:51.237 04:04:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:51.237 "name": "raid_bdev1", 00:13:51.237 "uuid": "3c1e5d1b-a0fd-419e-b0a8-52362cfedf78", 00:13:51.237 "strip_size_kb": 0, 00:13:51.237 "state": "online", 00:13:51.237 "raid_level": "raid1", 00:13:51.237 "superblock": true, 00:13:51.237 "num_base_bdevs": 4, 00:13:51.237 "num_base_bdevs_discovered": 4, 00:13:51.237 
"num_base_bdevs_operational": 4, 00:13:51.237 "base_bdevs_list": [ 00:13:51.237 { 00:13:51.237 "name": "BaseBdev1", 00:13:51.237 "uuid": "94ea214d-b7ec-53c2-8a5f-7417e60a46cf", 00:13:51.237 "is_configured": true, 00:13:51.237 "data_offset": 2048, 00:13:51.237 "data_size": 63488 00:13:51.237 }, 00:13:51.237 { 00:13:51.237 "name": "BaseBdev2", 00:13:51.237 "uuid": "7972b418-2c21-51a3-9c16-9c4aa0e0d9a9", 00:13:51.237 "is_configured": true, 00:13:51.237 "data_offset": 2048, 00:13:51.237 "data_size": 63488 00:13:51.237 }, 00:13:51.237 { 00:13:51.237 "name": "BaseBdev3", 00:13:51.237 "uuid": "8e920a18-0a82-5008-9258-655afa73bbea", 00:13:51.237 "is_configured": true, 00:13:51.237 "data_offset": 2048, 00:13:51.237 "data_size": 63488 00:13:51.237 }, 00:13:51.237 { 00:13:51.237 "name": "BaseBdev4", 00:13:51.237 "uuid": "61b9860f-d191-5bd1-91e7-37d5ea74899f", 00:13:51.237 "is_configured": true, 00:13:51.237 "data_offset": 2048, 00:13:51.237 "data_size": 63488 00:13:51.237 } 00:13:51.237 ] 00:13:51.237 }' 00:13:51.237 04:04:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:51.237 04:04:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:51.497 04:04:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:13:51.497 04:04:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:13:51.497 [2024-12-06 04:04:44.812348] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006a40 00:13:52.436 04:04:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:13:52.436 04:04:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:52.436 04:04:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:52.436 [2024-12-06 04:04:45.719113] 
bdev_raid.c:2276:_raid_bdev_fail_base_bdev: *NOTICE*: Failing base bdev in slot 0 ('BaseBdev1') of raid bdev 'raid_bdev1' 00:13:52.436 [2024-12-06 04:04:45.719261] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:13:52.436 [2024-12-06 04:04:45.719530] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000006a40 00:13:52.436 04:04:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:52.436 04:04:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:13:52.436 04:04:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]] 00:13:52.436 04:04:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ write = \w\r\i\t\e ]] 00:13:52.436 04:04:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=3 00:13:52.436 04:04:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:13:52.436 04:04:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:52.436 04:04:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:52.436 04:04:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:52.436 04:04:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:52.436 04:04:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:52.436 04:04:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:52.436 04:04:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:52.436 04:04:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:52.436 04:04:45 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:13:52.436 04:04:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:52.436 04:04:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:52.436 04:04:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:52.436 04:04:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:52.436 04:04:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:52.436 04:04:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:52.436 "name": "raid_bdev1", 00:13:52.436 "uuid": "3c1e5d1b-a0fd-419e-b0a8-52362cfedf78", 00:13:52.436 "strip_size_kb": 0, 00:13:52.436 "state": "online", 00:13:52.436 "raid_level": "raid1", 00:13:52.436 "superblock": true, 00:13:52.436 "num_base_bdevs": 4, 00:13:52.436 "num_base_bdevs_discovered": 3, 00:13:52.436 "num_base_bdevs_operational": 3, 00:13:52.436 "base_bdevs_list": [ 00:13:52.436 { 00:13:52.436 "name": null, 00:13:52.436 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:52.436 "is_configured": false, 00:13:52.436 "data_offset": 0, 00:13:52.436 "data_size": 63488 00:13:52.436 }, 00:13:52.436 { 00:13:52.436 "name": "BaseBdev2", 00:13:52.436 "uuid": "7972b418-2c21-51a3-9c16-9c4aa0e0d9a9", 00:13:52.436 "is_configured": true, 00:13:52.436 "data_offset": 2048, 00:13:52.436 "data_size": 63488 00:13:52.436 }, 00:13:52.436 { 00:13:52.436 "name": "BaseBdev3", 00:13:52.436 "uuid": "8e920a18-0a82-5008-9258-655afa73bbea", 00:13:52.436 "is_configured": true, 00:13:52.436 "data_offset": 2048, 00:13:52.436 "data_size": 63488 00:13:52.436 }, 00:13:52.436 { 00:13:52.436 "name": "BaseBdev4", 00:13:52.436 "uuid": "61b9860f-d191-5bd1-91e7-37d5ea74899f", 00:13:52.436 "is_configured": true, 00:13:52.436 "data_offset": 2048, 00:13:52.436 "data_size": 63488 00:13:52.436 } 00:13:52.436 ] 
00:13:52.436 }' 00:13:52.436 04:04:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:52.436 04:04:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:53.003 04:04:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:13:53.003 04:04:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:53.003 04:04:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:53.003 [2024-12-06 04:04:46.175171] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:13:53.003 [2024-12-06 04:04:46.175274] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:53.003 [2024-12-06 04:04:46.178295] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:53.003 [2024-12-06 04:04:46.178387] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:53.003 [2024-12-06 04:04:46.178510] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:53.003 [2024-12-06 04:04:46.178561] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state offline 00:13:53.003 04:04:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:53.003 { 00:13:53.003 "results": [ 00:13:53.003 { 00:13:53.003 "job": "raid_bdev1", 00:13:53.003 "core_mask": "0x1", 00:13:53.003 "workload": "randrw", 00:13:53.003 "percentage": 50, 00:13:53.003 "status": "finished", 00:13:53.003 "queue_depth": 1, 00:13:53.003 "io_size": 131072, 00:13:53.003 "runtime": 1.363695, 00:13:53.003 "iops": 11192.385394094721, 00:13:53.003 "mibps": 1399.0481742618401, 00:13:53.003 "io_failed": 0, 00:13:53.003 "io_timeout": 0, 00:13:53.003 "avg_latency_us": 86.57847332948617, 00:13:53.003 "min_latency_us": 23.923144104803495, 
00:13:53.003 "max_latency_us": 1531.0812227074236 00:13:53.003 } 00:13:53.003 ], 00:13:53.003 "core_count": 1 00:13:53.003 } 00:13:53.003 04:04:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 75285 00:13:53.003 04:04:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # '[' -z 75285 ']' 00:13:53.003 04:04:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # kill -0 75285 00:13:53.003 04:04:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # uname 00:13:53.003 04:04:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:53.003 04:04:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 75285 00:13:53.003 04:04:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:53.003 04:04:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:53.003 04:04:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 75285' 00:13:53.003 killing process with pid 75285 00:13:53.004 04:04:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@973 -- # kill 75285 00:13:53.004 [2024-12-06 04:04:46.223718] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:13:53.004 04:04:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@978 -- # wait 75285 00:13:53.262 [2024-12-06 04:04:46.562936] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:13:54.644 04:04:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.XbSRUxj5V6 00:13:54.644 04:04:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:13:54.644 04:04:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:13:54.644 04:04:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 
-- # fail_per_s=0.00 00:13:54.644 04:04:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid1 00:13:54.644 ************************************ 00:13:54.644 END TEST raid_write_error_test 00:13:54.644 ************************************ 00:13:54.644 04:04:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:13:54.644 04:04:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@199 -- # return 0 00:13:54.644 04:04:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.00 = \0\.\0\0 ]] 00:13:54.644 00:13:54.644 real 0m4.708s 00:13:54.644 user 0m5.475s 00:13:54.644 sys 0m0.597s 00:13:54.644 04:04:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:54.644 04:04:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:54.644 04:04:47 bdev_raid -- bdev/bdev_raid.sh@976 -- # '[' true = true ']' 00:13:54.644 04:04:47 bdev_raid -- bdev/bdev_raid.sh@977 -- # for n in 2 4 00:13:54.644 04:04:47 bdev_raid -- bdev/bdev_raid.sh@978 -- # run_test raid_rebuild_test raid_rebuild_test raid1 2 false false true 00:13:54.644 04:04:47 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:13:54.644 04:04:47 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:54.644 04:04:47 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:13:54.644 ************************************ 00:13:54.644 START TEST raid_rebuild_test 00:13:54.644 ************************************ 00:13:54.644 04:04:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 2 false false true 00:13:54.644 04:04:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:13:54.644 04:04:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2 00:13:54.644 04:04:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@571 -- # local superblock=false 00:13:54.644 
04:04:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:13:54.644 04:04:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@573 -- # local verify=true 00:13:54.644 04:04:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:13:54.644 04:04:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:54.644 04:04:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:13:54.644 04:04:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:13:54.644 04:04:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:54.644 04:04:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:13:54.644 04:04:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:13:54.644 04:04:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:54.644 04:04:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:13:54.644 04:04:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:13:54.644 04:04:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:13:54.644 04:04:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # local strip_size 00:13:54.644 04:04:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@577 -- # local create_arg 00:13:54.644 04:04:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:13:54.644 04:04:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@579 -- # local data_offset 00:13:54.644 04:04:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:13:54.644 04:04:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:13:54.644 04:04:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@592 -- # '[' false = true ']' 
00:13:54.644 04:04:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@597 -- # raid_pid=75423 00:13:54.644 04:04:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:13:54.644 04:04:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@598 -- # waitforlisten 75423 00:13:54.644 04:04:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@835 -- # '[' -z 75423 ']' 00:13:54.644 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:54.644 04:04:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:54.644 04:04:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:54.644 04:04:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:54.644 04:04:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:54.644 04:04:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:54.644 [2024-12-06 04:04:47.982412] Starting SPDK v25.01-pre git sha1 a4d2a837b / DPDK 24.03.0 initialization... 00:13:54.644 [2024-12-06 04:04:47.982609] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.ealI/O size of 3145728 is greater than zero copy threshold (65536). 00:13:54.644 Zero copy mechanism will not be used. 
00:13:54.644 :6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75423 ] 00:13:54.904 [2024-12-06 04:04:48.156495] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:55.163 [2024-12-06 04:04:48.274956] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:55.163 [2024-12-06 04:04:48.478079] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:55.163 [2024-12-06 04:04:48.478208] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:55.732 04:04:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:55.732 04:04:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@868 -- # return 0 00:13:55.732 04:04:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:13:55.732 04:04:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:13:55.732 04:04:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:55.732 04:04:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:55.732 BaseBdev1_malloc 00:13:55.732 04:04:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:55.732 04:04:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:13:55.732 04:04:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:55.733 04:04:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:55.733 [2024-12-06 04:04:48.875921] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:13:55.733 [2024-12-06 04:04:48.876078] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:55.733 [2024-12-06 
04:04:48.876151] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:13:55.733 [2024-12-06 04:04:48.876202] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:55.733 [2024-12-06 04:04:48.878697] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:55.733 BaseBdev1 00:13:55.733 [2024-12-06 04:04:48.878787] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:13:55.733 04:04:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:55.733 04:04:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:13:55.733 04:04:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:13:55.733 04:04:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:55.733 04:04:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:55.733 BaseBdev2_malloc 00:13:55.733 04:04:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:55.733 04:04:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:13:55.733 04:04:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:55.733 04:04:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:55.733 [2024-12-06 04:04:48.933770] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:13:55.733 [2024-12-06 04:04:48.933890] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:55.733 [2024-12-06 04:04:48.933934] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:13:55.733 [2024-12-06 04:04:48.933970] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 
00:13:55.733 [2024-12-06 04:04:48.936200] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:55.733 [2024-12-06 04:04:48.936273] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:13:55.733 BaseBdev2 00:13:55.733 04:04:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:55.733 04:04:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:13:55.733 04:04:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:55.733 04:04:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:55.733 spare_malloc 00:13:55.733 04:04:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:55.733 04:04:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:13:55.733 04:04:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:55.733 04:04:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:55.733 spare_delay 00:13:55.733 04:04:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:55.733 04:04:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:13:55.733 04:04:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:55.733 04:04:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:55.733 [2024-12-06 04:04:49.011146] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:13:55.733 [2024-12-06 04:04:49.011265] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:55.733 [2024-12-06 04:04:49.011311] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created 
at: 0x0x616000009080 00:13:55.733 [2024-12-06 04:04:49.011347] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:55.733 [2024-12-06 04:04:49.013855] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:55.733 [2024-12-06 04:04:49.013934] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:13:55.733 spare 00:13:55.733 04:04:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:55.733 04:04:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:13:55.733 04:04:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:55.733 04:04:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:55.733 [2024-12-06 04:04:49.023217] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:55.733 [2024-12-06 04:04:49.025261] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:55.733 [2024-12-06 04:04:49.025405] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:13:55.733 [2024-12-06 04:04:49.025443] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:13:55.733 [2024-12-06 04:04:49.025746] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:13:55.733 [2024-12-06 04:04:49.025960] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:13:55.733 [2024-12-06 04:04:49.026005] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:13:55.733 [2024-12-06 04:04:49.026245] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:55.733 04:04:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:55.733 
04:04:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:13:55.733 04:04:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:55.733 04:04:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:55.733 04:04:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:55.733 04:04:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:55.733 04:04:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:55.733 04:04:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:55.733 04:04:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:55.733 04:04:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:55.733 04:04:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:55.733 04:04:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:55.733 04:04:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:55.733 04:04:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:55.733 04:04:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:55.733 04:04:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:55.733 04:04:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:55.733 "name": "raid_bdev1", 00:13:55.733 "uuid": "ae4b42ff-e622-4a79-ac7b-d4b2a5c397d3", 00:13:55.733 "strip_size_kb": 0, 00:13:55.733 "state": "online", 00:13:55.733 "raid_level": "raid1", 00:13:55.733 "superblock": false, 00:13:55.733 "num_base_bdevs": 2, 00:13:55.733 "num_base_bdevs_discovered": 
2, 00:13:55.733 "num_base_bdevs_operational": 2, 00:13:55.733 "base_bdevs_list": [ 00:13:55.733 { 00:13:55.733 "name": "BaseBdev1", 00:13:55.733 "uuid": "4c3f6cf2-926c-5ecd-b019-0a12e57f9abd", 00:13:55.733 "is_configured": true, 00:13:55.733 "data_offset": 0, 00:13:55.733 "data_size": 65536 00:13:55.733 }, 00:13:55.733 { 00:13:55.733 "name": "BaseBdev2", 00:13:55.733 "uuid": "d0ac05a1-a6c0-55ff-b689-be4e31444aa0", 00:13:55.733 "is_configured": true, 00:13:55.733 "data_offset": 0, 00:13:55.733 "data_size": 65536 00:13:55.733 } 00:13:55.733 ] 00:13:55.733 }' 00:13:55.733 04:04:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:55.733 04:04:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:56.302 04:04:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:13:56.302 04:04:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:13:56.302 04:04:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:56.302 04:04:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:56.302 [2024-12-06 04:04:49.478718] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:56.302 04:04:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:56.302 04:04:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=65536 00:13:56.302 04:04:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:56.302 04:04:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:13:56.302 04:04:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:56.302 04:04:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:56.302 04:04:49 bdev_raid.raid_rebuild_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:56.302 04:04:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@619 -- # data_offset=0 00:13:56.302 04:04:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:13:56.302 04:04:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:13:56.302 04:04:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:13:56.302 04:04:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:13:56.302 04:04:49 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:13:56.302 04:04:49 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:13:56.302 04:04:49 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:13:56.302 04:04:49 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:13:56.302 04:04:49 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:13:56.302 04:04:49 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:13:56.302 04:04:49 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:13:56.302 04:04:49 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:13:56.302 04:04:49 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:13:56.562 [2024-12-06 04:04:49.825868] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:13:56.562 /dev/nbd0 00:13:56.562 04:04:49 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:13:56.562 04:04:49 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:13:56.562 04:04:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 
00:13:56.562 04:04:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:13:56.562 04:04:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:13:56.562 04:04:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:13:56.562 04:04:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:13:56.562 04:04:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@877 -- # break 00:13:56.562 04:04:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:13:56.562 04:04:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:13:56.562 04:04:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:56.562 1+0 records in 00:13:56.562 1+0 records out 00:13:56.562 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000298381 s, 13.7 MB/s 00:13:56.562 04:04:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:56.562 04:04:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:13:56.562 04:04:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:56.562 04:04:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:13:56.562 04:04:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:13:56.562 04:04:49 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:56.562 04:04:49 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:13:56.562 04:04:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@629 -- # '[' raid1 = raid5f ']' 00:13:56.562 04:04:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@633 -- # 
write_unit_size=1 00:13:56.562 04:04:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=512 count=65536 oflag=direct 00:14:01.872 65536+0 records in 00:14:01.872 65536+0 records out 00:14:01.872 33554432 bytes (34 MB, 32 MiB) copied, 4.6266 s, 7.3 MB/s 00:14:01.872 04:04:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:14:01.872 04:04:54 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:14:01.872 04:04:54 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:14:01.872 04:04:54 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:14:01.872 04:04:54 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:14:01.872 04:04:54 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:01.872 04:04:54 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:14:01.872 04:04:54 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:14:01.872 04:04:54 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:14:01.873 04:04:54 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:14:01.873 04:04:54 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:01.873 04:04:54 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:01.873 04:04:54 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:14:01.873 [2024-12-06 04:04:54.771538] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:01.873 04:04:54 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:14:01.873 04:04:54 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:14:01.873 
04:04:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:14:01.873 04:04:54 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:01.873 04:04:54 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:01.873 [2024-12-06 04:04:54.779642] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:14:01.873 04:04:54 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:01.873 04:04:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:14:01.873 04:04:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:01.873 04:04:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:01.873 04:04:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:01.873 04:04:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:01.873 04:04:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:14:01.873 04:04:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:01.873 04:04:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:01.873 04:04:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:01.873 04:04:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:01.873 04:04:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:01.873 04:04:54 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:01.873 04:04:54 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:01.873 04:04:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | 
select(.name == "raid_bdev1")' 00:14:01.873 04:04:54 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:01.873 04:04:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:01.873 "name": "raid_bdev1", 00:14:01.873 "uuid": "ae4b42ff-e622-4a79-ac7b-d4b2a5c397d3", 00:14:01.873 "strip_size_kb": 0, 00:14:01.873 "state": "online", 00:14:01.873 "raid_level": "raid1", 00:14:01.873 "superblock": false, 00:14:01.873 "num_base_bdevs": 2, 00:14:01.873 "num_base_bdevs_discovered": 1, 00:14:01.873 "num_base_bdevs_operational": 1, 00:14:01.873 "base_bdevs_list": [ 00:14:01.873 { 00:14:01.873 "name": null, 00:14:01.873 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:01.873 "is_configured": false, 00:14:01.873 "data_offset": 0, 00:14:01.873 "data_size": 65536 00:14:01.873 }, 00:14:01.873 { 00:14:01.873 "name": "BaseBdev2", 00:14:01.873 "uuid": "d0ac05a1-a6c0-55ff-b689-be4e31444aa0", 00:14:01.873 "is_configured": true, 00:14:01.873 "data_offset": 0, 00:14:01.873 "data_size": 65536 00:14:01.873 } 00:14:01.873 ] 00:14:01.873 }' 00:14:01.873 04:04:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:01.873 04:04:54 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:02.132 04:04:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:14:02.132 04:04:55 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:02.132 04:04:55 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:02.132 [2024-12-06 04:04:55.266882] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:02.132 [2024-12-06 04:04:55.285777] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000d09bd0 00:14:02.132 04:04:55 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:02.132 04:04:55 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@647 -- # sleep 1 00:14:02.132 [2024-12-06 04:04:55.287996] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:14:03.069 04:04:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:03.069 04:04:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:03.069 04:04:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:03.069 04:04:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:03.069 04:04:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:03.069 04:04:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:03.069 04:04:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:03.069 04:04:56 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:03.069 04:04:56 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:03.069 04:04:56 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:03.069 04:04:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:03.069 "name": "raid_bdev1", 00:14:03.069 "uuid": "ae4b42ff-e622-4a79-ac7b-d4b2a5c397d3", 00:14:03.069 "strip_size_kb": 0, 00:14:03.069 "state": "online", 00:14:03.069 "raid_level": "raid1", 00:14:03.069 "superblock": false, 00:14:03.069 "num_base_bdevs": 2, 00:14:03.069 "num_base_bdevs_discovered": 2, 00:14:03.069 "num_base_bdevs_operational": 2, 00:14:03.069 "process": { 00:14:03.069 "type": "rebuild", 00:14:03.069 "target": "spare", 00:14:03.069 "progress": { 00:14:03.069 "blocks": 20480, 00:14:03.069 "percent": 31 00:14:03.069 } 00:14:03.069 }, 00:14:03.069 "base_bdevs_list": [ 00:14:03.069 { 
00:14:03.069 "name": "spare", 00:14:03.069 "uuid": "e5258d2f-3871-5310-8839-35881936ba95", 00:14:03.069 "is_configured": true, 00:14:03.069 "data_offset": 0, 00:14:03.069 "data_size": 65536 00:14:03.069 }, 00:14:03.069 { 00:14:03.069 "name": "BaseBdev2", 00:14:03.069 "uuid": "d0ac05a1-a6c0-55ff-b689-be4e31444aa0", 00:14:03.069 "is_configured": true, 00:14:03.069 "data_offset": 0, 00:14:03.069 "data_size": 65536 00:14:03.069 } 00:14:03.069 ] 00:14:03.069 }' 00:14:03.069 04:04:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:03.069 04:04:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:03.069 04:04:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:03.328 04:04:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:03.328 04:04:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:14:03.328 04:04:56 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:03.328 04:04:56 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:03.328 [2024-12-06 04:04:56.446956] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:03.328 [2024-12-06 04:04:56.494080] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:14:03.328 [2024-12-06 04:04:56.494179] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:03.328 [2024-12-06 04:04:56.494196] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:03.328 [2024-12-06 04:04:56.494206] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:14:03.328 04:04:56 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:03.328 04:04:56 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:14:03.328 04:04:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:03.328 04:04:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:03.328 04:04:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:03.328 04:04:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:03.328 04:04:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:14:03.328 04:04:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:03.328 04:04:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:03.328 04:04:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:03.328 04:04:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:03.328 04:04:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:03.329 04:04:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:03.329 04:04:56 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:03.329 04:04:56 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:03.329 04:04:56 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:03.329 04:04:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:03.329 "name": "raid_bdev1", 00:14:03.329 "uuid": "ae4b42ff-e622-4a79-ac7b-d4b2a5c397d3", 00:14:03.329 "strip_size_kb": 0, 00:14:03.329 "state": "online", 00:14:03.329 "raid_level": "raid1", 00:14:03.329 "superblock": false, 00:14:03.329 "num_base_bdevs": 2, 00:14:03.329 "num_base_bdevs_discovered": 1, 
00:14:03.329 "num_base_bdevs_operational": 1, 00:14:03.329 "base_bdevs_list": [ 00:14:03.329 { 00:14:03.329 "name": null, 00:14:03.329 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:03.329 "is_configured": false, 00:14:03.329 "data_offset": 0, 00:14:03.329 "data_size": 65536 00:14:03.329 }, 00:14:03.329 { 00:14:03.329 "name": "BaseBdev2", 00:14:03.329 "uuid": "d0ac05a1-a6c0-55ff-b689-be4e31444aa0", 00:14:03.329 "is_configured": true, 00:14:03.329 "data_offset": 0, 00:14:03.329 "data_size": 65536 00:14:03.329 } 00:14:03.329 ] 00:14:03.329 }' 00:14:03.329 04:04:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:03.329 04:04:56 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:03.897 04:04:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:03.897 04:04:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:03.897 04:04:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:03.897 04:04:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:03.897 04:04:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:03.897 04:04:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:03.897 04:04:56 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:03.897 04:04:56 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:03.897 04:04:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:03.897 04:04:56 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:03.897 04:04:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:03.897 "name": "raid_bdev1", 00:14:03.897 "uuid": 
"ae4b42ff-e622-4a79-ac7b-d4b2a5c397d3", 00:14:03.897 "strip_size_kb": 0, 00:14:03.897 "state": "online", 00:14:03.897 "raid_level": "raid1", 00:14:03.897 "superblock": false, 00:14:03.897 "num_base_bdevs": 2, 00:14:03.897 "num_base_bdevs_discovered": 1, 00:14:03.897 "num_base_bdevs_operational": 1, 00:14:03.897 "base_bdevs_list": [ 00:14:03.897 { 00:14:03.897 "name": null, 00:14:03.897 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:03.897 "is_configured": false, 00:14:03.897 "data_offset": 0, 00:14:03.897 "data_size": 65536 00:14:03.897 }, 00:14:03.897 { 00:14:03.897 "name": "BaseBdev2", 00:14:03.897 "uuid": "d0ac05a1-a6c0-55ff-b689-be4e31444aa0", 00:14:03.897 "is_configured": true, 00:14:03.897 "data_offset": 0, 00:14:03.897 "data_size": 65536 00:14:03.897 } 00:14:03.897 ] 00:14:03.897 }' 00:14:03.897 04:04:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:03.897 04:04:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:03.897 04:04:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:03.897 04:04:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:03.897 04:04:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:14:03.897 04:04:57 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:03.897 04:04:57 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:03.897 [2024-12-06 04:04:57.118148] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:03.897 [2024-12-06 04:04:57.135414] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000d09ca0 00:14:03.897 04:04:57 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:03.897 04:04:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@663 -- # sleep 
1 00:14:03.897 [2024-12-06 04:04:57.137479] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:14:04.835 04:04:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:04.835 04:04:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:04.835 04:04:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:04.835 04:04:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:04.835 04:04:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:04.835 04:04:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:04.835 04:04:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:04.835 04:04:58 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:04.835 04:04:58 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:04.835 04:04:58 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:04.835 04:04:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:04.835 "name": "raid_bdev1", 00:14:04.835 "uuid": "ae4b42ff-e622-4a79-ac7b-d4b2a5c397d3", 00:14:04.835 "strip_size_kb": 0, 00:14:04.835 "state": "online", 00:14:04.835 "raid_level": "raid1", 00:14:04.835 "superblock": false, 00:14:04.835 "num_base_bdevs": 2, 00:14:04.835 "num_base_bdevs_discovered": 2, 00:14:04.835 "num_base_bdevs_operational": 2, 00:14:04.835 "process": { 00:14:04.835 "type": "rebuild", 00:14:04.835 "target": "spare", 00:14:04.835 "progress": { 00:14:04.835 "blocks": 20480, 00:14:04.835 "percent": 31 00:14:04.835 } 00:14:04.835 }, 00:14:04.835 "base_bdevs_list": [ 00:14:04.835 { 00:14:04.835 "name": "spare", 00:14:04.835 "uuid": 
"e5258d2f-3871-5310-8839-35881936ba95", 00:14:04.835 "is_configured": true, 00:14:04.835 "data_offset": 0, 00:14:04.835 "data_size": 65536 00:14:04.835 }, 00:14:04.835 { 00:14:04.835 "name": "BaseBdev2", 00:14:04.835 "uuid": "d0ac05a1-a6c0-55ff-b689-be4e31444aa0", 00:14:04.835 "is_configured": true, 00:14:04.835 "data_offset": 0, 00:14:04.835 "data_size": 65536 00:14:04.835 } 00:14:04.835 ] 00:14:04.835 }' 00:14:05.095 04:04:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:05.095 04:04:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:05.095 04:04:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:05.095 04:04:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:05.095 04:04:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@666 -- # '[' false = true ']' 00:14:05.095 04:04:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:14:05.095 04:04:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:14:05.095 04:04:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:14:05.095 04:04:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@706 -- # local timeout=376 00:14:05.095 04:04:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:05.095 04:04:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:05.095 04:04:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:05.095 04:04:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:05.095 04:04:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:05.095 04:04:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # 
local raid_bdev_info 00:14:05.095 04:04:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:05.095 04:04:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:05.095 04:04:58 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:05.095 04:04:58 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:05.095 04:04:58 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:05.095 04:04:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:05.095 "name": "raid_bdev1", 00:14:05.095 "uuid": "ae4b42ff-e622-4a79-ac7b-d4b2a5c397d3", 00:14:05.095 "strip_size_kb": 0, 00:14:05.095 "state": "online", 00:14:05.095 "raid_level": "raid1", 00:14:05.095 "superblock": false, 00:14:05.095 "num_base_bdevs": 2, 00:14:05.095 "num_base_bdevs_discovered": 2, 00:14:05.095 "num_base_bdevs_operational": 2, 00:14:05.095 "process": { 00:14:05.095 "type": "rebuild", 00:14:05.095 "target": "spare", 00:14:05.095 "progress": { 00:14:05.095 "blocks": 22528, 00:14:05.095 "percent": 34 00:14:05.095 } 00:14:05.095 }, 00:14:05.095 "base_bdevs_list": [ 00:14:05.095 { 00:14:05.095 "name": "spare", 00:14:05.095 "uuid": "e5258d2f-3871-5310-8839-35881936ba95", 00:14:05.095 "is_configured": true, 00:14:05.095 "data_offset": 0, 00:14:05.095 "data_size": 65536 00:14:05.095 }, 00:14:05.095 { 00:14:05.095 "name": "BaseBdev2", 00:14:05.095 "uuid": "d0ac05a1-a6c0-55ff-b689-be4e31444aa0", 00:14:05.095 "is_configured": true, 00:14:05.095 "data_offset": 0, 00:14:05.095 "data_size": 65536 00:14:05.095 } 00:14:05.095 ] 00:14:05.095 }' 00:14:05.095 04:04:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:05.095 04:04:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:05.095 04:04:58 bdev_raid.raid_rebuild_test 
-- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:05.095 04:04:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:05.095 04:04:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:14:06.473 04:04:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:06.473 04:04:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:06.473 04:04:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:06.473 04:04:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:06.473 04:04:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:06.473 04:04:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:06.473 04:04:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:06.473 04:04:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:06.473 04:04:59 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:06.473 04:04:59 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:06.473 04:04:59 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:06.473 04:04:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:06.473 "name": "raid_bdev1", 00:14:06.473 "uuid": "ae4b42ff-e622-4a79-ac7b-d4b2a5c397d3", 00:14:06.473 "strip_size_kb": 0, 00:14:06.473 "state": "online", 00:14:06.473 "raid_level": "raid1", 00:14:06.473 "superblock": false, 00:14:06.473 "num_base_bdevs": 2, 00:14:06.473 "num_base_bdevs_discovered": 2, 00:14:06.473 "num_base_bdevs_operational": 2, 00:14:06.473 "process": { 00:14:06.473 "type": "rebuild", 00:14:06.473 "target": "spare", 
00:14:06.473 "progress": { 00:14:06.473 "blocks": 45056, 00:14:06.473 "percent": 68 00:14:06.473 } 00:14:06.473 }, 00:14:06.473 "base_bdevs_list": [ 00:14:06.473 { 00:14:06.473 "name": "spare", 00:14:06.473 "uuid": "e5258d2f-3871-5310-8839-35881936ba95", 00:14:06.473 "is_configured": true, 00:14:06.473 "data_offset": 0, 00:14:06.473 "data_size": 65536 00:14:06.473 }, 00:14:06.473 { 00:14:06.473 "name": "BaseBdev2", 00:14:06.473 "uuid": "d0ac05a1-a6c0-55ff-b689-be4e31444aa0", 00:14:06.473 "is_configured": true, 00:14:06.473 "data_offset": 0, 00:14:06.473 "data_size": 65536 00:14:06.473 } 00:14:06.473 ] 00:14:06.473 }' 00:14:06.473 04:04:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:06.473 04:04:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:06.473 04:04:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:06.473 04:04:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:06.473 04:04:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:14:07.043 [2024-12-06 04:05:00.352740] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:14:07.043 [2024-12-06 04:05:00.352983] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:14:07.043 [2024-12-06 04:05:00.353101] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:07.303 04:05:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:07.303 04:05:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:07.303 04:05:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:07.303 04:05:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 
00:14:07.303 04:05:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:07.303 04:05:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:07.303 04:05:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:07.303 04:05:00 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:07.303 04:05:00 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:07.303 04:05:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:07.303 04:05:00 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:07.303 04:05:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:07.303 "name": "raid_bdev1", 00:14:07.303 "uuid": "ae4b42ff-e622-4a79-ac7b-d4b2a5c397d3", 00:14:07.303 "strip_size_kb": 0, 00:14:07.303 "state": "online", 00:14:07.303 "raid_level": "raid1", 00:14:07.303 "superblock": false, 00:14:07.303 "num_base_bdevs": 2, 00:14:07.303 "num_base_bdevs_discovered": 2, 00:14:07.303 "num_base_bdevs_operational": 2, 00:14:07.303 "base_bdevs_list": [ 00:14:07.303 { 00:14:07.303 "name": "spare", 00:14:07.303 "uuid": "e5258d2f-3871-5310-8839-35881936ba95", 00:14:07.303 "is_configured": true, 00:14:07.303 "data_offset": 0, 00:14:07.303 "data_size": 65536 00:14:07.303 }, 00:14:07.303 { 00:14:07.303 "name": "BaseBdev2", 00:14:07.303 "uuid": "d0ac05a1-a6c0-55ff-b689-be4e31444aa0", 00:14:07.303 "is_configured": true, 00:14:07.303 "data_offset": 0, 00:14:07.303 "data_size": 65536 00:14:07.303 } 00:14:07.303 ] 00:14:07.303 }' 00:14:07.303 04:05:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:07.303 04:05:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:14:07.563 04:05:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # 
jq -r '.process.target // "none"' 00:14:07.563 04:05:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:14:07.563 04:05:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@709 -- # break 00:14:07.563 04:05:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:07.563 04:05:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:07.563 04:05:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:07.563 04:05:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:07.563 04:05:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:07.563 04:05:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:07.563 04:05:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:07.563 04:05:00 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:07.563 04:05:00 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:07.563 04:05:00 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:07.563 04:05:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:07.563 "name": "raid_bdev1", 00:14:07.563 "uuid": "ae4b42ff-e622-4a79-ac7b-d4b2a5c397d3", 00:14:07.563 "strip_size_kb": 0, 00:14:07.563 "state": "online", 00:14:07.563 "raid_level": "raid1", 00:14:07.563 "superblock": false, 00:14:07.563 "num_base_bdevs": 2, 00:14:07.563 "num_base_bdevs_discovered": 2, 00:14:07.563 "num_base_bdevs_operational": 2, 00:14:07.563 "base_bdevs_list": [ 00:14:07.563 { 00:14:07.563 "name": "spare", 00:14:07.563 "uuid": "e5258d2f-3871-5310-8839-35881936ba95", 00:14:07.563 "is_configured": true, 00:14:07.563 "data_offset": 0, 00:14:07.563 "data_size": 65536 
00:14:07.563 }, 00:14:07.563 { 00:14:07.563 "name": "BaseBdev2", 00:14:07.563 "uuid": "d0ac05a1-a6c0-55ff-b689-be4e31444aa0", 00:14:07.563 "is_configured": true, 00:14:07.563 "data_offset": 0, 00:14:07.563 "data_size": 65536 00:14:07.563 } 00:14:07.563 ] 00:14:07.563 }' 00:14:07.563 04:05:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:07.563 04:05:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:07.563 04:05:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:07.563 04:05:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:07.563 04:05:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:14:07.563 04:05:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:07.563 04:05:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:07.563 04:05:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:07.563 04:05:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:07.563 04:05:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:07.563 04:05:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:07.563 04:05:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:07.563 04:05:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:07.563 04:05:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:07.563 04:05:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:07.563 04:05:00 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 
00:14:07.563 04:05:00 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:07.563 04:05:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:07.563 04:05:00 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:07.563 04:05:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:07.563 "name": "raid_bdev1", 00:14:07.563 "uuid": "ae4b42ff-e622-4a79-ac7b-d4b2a5c397d3", 00:14:07.563 "strip_size_kb": 0, 00:14:07.563 "state": "online", 00:14:07.563 "raid_level": "raid1", 00:14:07.563 "superblock": false, 00:14:07.563 "num_base_bdevs": 2, 00:14:07.563 "num_base_bdevs_discovered": 2, 00:14:07.563 "num_base_bdevs_operational": 2, 00:14:07.563 "base_bdevs_list": [ 00:14:07.563 { 00:14:07.563 "name": "spare", 00:14:07.563 "uuid": "e5258d2f-3871-5310-8839-35881936ba95", 00:14:07.563 "is_configured": true, 00:14:07.563 "data_offset": 0, 00:14:07.563 "data_size": 65536 00:14:07.563 }, 00:14:07.563 { 00:14:07.563 "name": "BaseBdev2", 00:14:07.563 "uuid": "d0ac05a1-a6c0-55ff-b689-be4e31444aa0", 00:14:07.563 "is_configured": true, 00:14:07.563 "data_offset": 0, 00:14:07.563 "data_size": 65536 00:14:07.563 } 00:14:07.563 ] 00:14:07.563 }' 00:14:07.563 04:05:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:07.563 04:05:00 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:08.138 04:05:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:14:08.138 04:05:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:08.138 04:05:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:08.138 [2024-12-06 04:05:01.294979] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:08.138 [2024-12-06 04:05:01.295081] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid 
bdev state changing from online to offline 00:14:08.138 [2024-12-06 04:05:01.295200] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:08.138 [2024-12-06 04:05:01.295279] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:08.138 [2024-12-06 04:05:01.295292] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:14:08.138 04:05:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:08.138 04:05:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:08.138 04:05:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # jq length 00:14:08.138 04:05:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:08.138 04:05:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:08.138 04:05:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:08.138 04:05:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:14:08.138 04:05:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:14:08.138 04:05:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:14:08.138 04:05:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:14:08.138 04:05:01 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:14:08.138 04:05:01 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:14:08.138 04:05:01 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:14:08.138 04:05:01 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:14:08.138 04:05:01 
bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:14:08.138 04:05:01 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:14:08.138 04:05:01 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:14:08.138 04:05:01 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:14:08.138 04:05:01 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:14:08.407 /dev/nbd0 00:14:08.407 04:05:01 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:14:08.407 04:05:01 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:14:08.407 04:05:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:14:08.407 04:05:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:14:08.407 04:05:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:14:08.407 04:05:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:14:08.407 04:05:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:14:08.407 04:05:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@877 -- # break 00:14:08.407 04:05:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:14:08.407 04:05:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:14:08.407 04:05:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:08.407 1+0 records in 00:14:08.407 1+0 records out 00:14:08.407 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000511396 s, 8.0 MB/s 00:14:08.407 04:05:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s 
/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:08.407 04:05:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:14:08.407 04:05:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:08.407 04:05:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:14:08.407 04:05:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:14:08.407 04:05:01 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:08.407 04:05:01 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:14:08.407 04:05:01 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:14:08.667 /dev/nbd1 00:14:08.667 04:05:01 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:14:08.667 04:05:01 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:14:08.667 04:05:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:14:08.667 04:05:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:14:08.667 04:05:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:14:08.667 04:05:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:14:08.667 04:05:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:14:08.667 04:05:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@877 -- # break 00:14:08.667 04:05:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:14:08.667 04:05:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:14:08.667 04:05:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 
of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:08.667 1+0 records in 00:14:08.667 1+0 records out 00:14:08.667 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000393829 s, 10.4 MB/s 00:14:08.667 04:05:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:08.667 04:05:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:14:08.667 04:05:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:08.667 04:05:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:14:08.667 04:05:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:14:08.667 04:05:01 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:08.667 04:05:01 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:14:08.667 04:05:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@738 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:14:08.927 04:05:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:14:08.927 04:05:02 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:14:08.927 04:05:02 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:14:08.927 04:05:02 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:14:08.927 04:05:02 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:14:08.927 04:05:02 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:08.927 04:05:02 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:14:09.187 04:05:02 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # 
basename /dev/nbd0 00:14:09.187 04:05:02 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:14:09.187 04:05:02 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:14:09.187 04:05:02 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:09.187 04:05:02 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:09.187 04:05:02 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:14:09.187 04:05:02 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:14:09.187 04:05:02 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:14:09.187 04:05:02 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:09.187 04:05:02 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:14:09.448 04:05:02 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:14:09.448 04:05:02 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:14:09.448 04:05:02 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:14:09.448 04:05:02 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:09.448 04:05:02 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:09.448 04:05:02 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:14:09.448 04:05:02 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:14:09.448 04:05:02 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:14:09.448 04:05:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@743 -- # '[' false = true ']' 00:14:09.448 04:05:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@784 -- # killprocess 75423 00:14:09.448 04:05:02 bdev_raid.raid_rebuild_test -- 
common/autotest_common.sh@954 -- # '[' -z 75423 ']' 00:14:09.448 04:05:02 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@958 -- # kill -0 75423 00:14:09.448 04:05:02 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@959 -- # uname 00:14:09.448 04:05:02 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:09.448 04:05:02 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 75423 00:14:09.448 killing process with pid 75423 00:14:09.448 Received shutdown signal, test time was about 60.000000 seconds 00:14:09.448 00:14:09.448 Latency(us) 00:14:09.448 [2024-12-06T04:05:02.802Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:09.448 [2024-12-06T04:05:02.802Z] =================================================================================================================== 00:14:09.448 [2024-12-06T04:05:02.802Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:14:09.448 04:05:02 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:09.448 04:05:02 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:09.448 04:05:02 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 75423' 00:14:09.448 04:05:02 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@973 -- # kill 75423 00:14:09.448 04:05:02 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@978 -- # wait 75423 00:14:09.448 [2024-12-06 04:05:02.701068] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:10.017 [2024-12-06 04:05:03.072928] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:11.397 ************************************ 00:14:11.397 END TEST raid_rebuild_test 00:14:11.397 ************************************ 00:14:11.397 04:05:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@786 -- # return 0 
00:14:11.397 00:14:11.397 real 0m16.541s 00:14:11.397 user 0m18.853s 00:14:11.397 sys 0m3.033s 00:14:11.397 04:05:04 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:11.397 04:05:04 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:11.397 04:05:04 bdev_raid -- bdev/bdev_raid.sh@979 -- # run_test raid_rebuild_test_sb raid_rebuild_test raid1 2 true false true 00:14:11.397 04:05:04 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:14:11.397 04:05:04 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:11.397 04:05:04 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:14:11.397 ************************************ 00:14:11.397 START TEST raid_rebuild_test_sb 00:14:11.397 ************************************ 00:14:11.397 04:05:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 2 true false true 00:14:11.397 04:05:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:14:11.397 04:05:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2 00:14:11.397 04:05:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:14:11.397 04:05:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:14:11.397 04:05:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # local verify=true 00:14:11.397 04:05:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:14:11.397 04:05:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:11.397 04:05:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:14:11.397 04:05:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:14:11.397 04:05:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:11.397 
04:05:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:14:11.397 04:05:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:14:11.397 04:05:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:11.397 04:05:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:14:11.398 04:05:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:14:11.398 04:05:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:14:11.398 04:05:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # local strip_size 00:14:11.398 04:05:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@577 -- # local create_arg 00:14:11.398 04:05:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:14:11.398 04:05:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@579 -- # local data_offset 00:14:11.398 04:05:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:14:11.398 04:05:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:14:11.398 04:05:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:14:11.398 04:05:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:14:11.398 04:05:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@597 -- # raid_pid=75865 00:14:11.398 04:05:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:14:11.398 04:05:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@598 -- # waitforlisten 75865 00:14:11.398 04:05:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@835 -- # '[' -z 75865 ']' 00:14:11.398 04:05:04 bdev_raid.raid_rebuild_test_sb 
-- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:11.398 04:05:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:11.398 04:05:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:11.398 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:11.398 04:05:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:11.398 04:05:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:11.398 [2024-12-06 04:05:04.554443] Starting SPDK v25.01-pre git sha1 a4d2a837b / DPDK 24.03.0 initialization... 00:14:11.398 [2024-12-06 04:05:04.554666] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75865 ] 00:14:11.398 I/O size of 3145728 is greater than zero copy threshold (65536). 00:14:11.398 Zero copy mechanism will not be used. 
00:14:11.398 [2024-12-06 04:05:04.725264] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:11.657 [2024-12-06 04:05:04.859125] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:11.918 [2024-12-06 04:05:05.094847] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:11.918 [2024-12-06 04:05:05.095002] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:12.178 04:05:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:12.178 04:05:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@868 -- # return 0 00:14:12.178 04:05:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:14:12.178 04:05:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:14:12.178 04:05:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:12.178 04:05:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:12.178 BaseBdev1_malloc 00:14:12.178 04:05:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:12.178 04:05:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:14:12.178 04:05:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:12.178 04:05:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:12.178 [2024-12-06 04:05:05.521066] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:14:12.178 [2024-12-06 04:05:05.521149] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:12.178 [2024-12-06 04:05:05.521178] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:14:12.178 [2024-12-06 
04:05:05.521195] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:12.178 [2024-12-06 04:05:05.524215] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:12.178 [2024-12-06 04:05:05.524260] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:14:12.178 BaseBdev1 00:14:12.178 04:05:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:12.178 04:05:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:14:12.178 04:05:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:14:12.178 04:05:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:12.178 04:05:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:12.435 BaseBdev2_malloc 00:14:12.435 04:05:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:12.435 04:05:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:14:12.435 04:05:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:12.435 04:05:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:12.435 [2024-12-06 04:05:05.584367] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:14:12.435 [2024-12-06 04:05:05.584533] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:12.435 [2024-12-06 04:05:05.584621] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:14:12.436 [2024-12-06 04:05:05.584668] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:12.436 [2024-12-06 04:05:05.587823] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev 
registered 00:14:12.436 [2024-12-06 04:05:05.587908] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:14:12.436 BaseBdev2 00:14:12.436 04:05:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:12.436 04:05:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:14:12.436 04:05:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:12.436 04:05:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:12.436 spare_malloc 00:14:12.436 04:05:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:12.436 04:05:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:14:12.436 04:05:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:12.436 04:05:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:12.436 spare_delay 00:14:12.436 04:05:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:12.436 04:05:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:14:12.436 04:05:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:12.436 04:05:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:12.436 [2024-12-06 04:05:05.667735] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:14:12.436 [2024-12-06 04:05:05.667808] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:12.436 [2024-12-06 04:05:05.667832] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:14:12.436 [2024-12-06 04:05:05.667847] 
vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:12.436 [2024-12-06 04:05:05.670804] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:12.436 [2024-12-06 04:05:05.670900] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:14:12.436 spare 00:14:12.436 04:05:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:12.436 04:05:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:14:12.436 04:05:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:12.436 04:05:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:12.436 [2024-12-06 04:05:05.675851] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:12.436 [2024-12-06 04:05:05.678548] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:12.436 [2024-12-06 04:05:05.678798] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:14:12.436 [2024-12-06 04:05:05.678855] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:14:12.436 [2024-12-06 04:05:05.679187] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:14:12.436 [2024-12-06 04:05:05.679427] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:14:12.436 [2024-12-06 04:05:05.679473] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:14:12.436 [2024-12-06 04:05:05.679710] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:12.436 04:05:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:12.436 04:05:05 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:14:12.436 04:05:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:12.436 04:05:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:12.436 04:05:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:12.436 04:05:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:12.436 04:05:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:12.436 04:05:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:12.436 04:05:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:12.436 04:05:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:12.436 04:05:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:12.436 04:05:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:12.436 04:05:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:12.436 04:05:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:12.436 04:05:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:12.436 04:05:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:12.436 04:05:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:12.436 "name": "raid_bdev1", 00:14:12.436 "uuid": "bff79d84-8a8b-4e57-a220-f6de654f12ab", 00:14:12.436 "strip_size_kb": 0, 00:14:12.436 "state": "online", 00:14:12.436 "raid_level": "raid1", 00:14:12.436 "superblock": true, 00:14:12.436 "num_base_bdevs": 2, 00:14:12.436 
"num_base_bdevs_discovered": 2, 00:14:12.436 "num_base_bdevs_operational": 2, 00:14:12.436 "base_bdevs_list": [ 00:14:12.436 { 00:14:12.436 "name": "BaseBdev1", 00:14:12.436 "uuid": "2a546b4d-995f-5ddf-817f-662afaf4e203", 00:14:12.436 "is_configured": true, 00:14:12.436 "data_offset": 2048, 00:14:12.436 "data_size": 63488 00:14:12.436 }, 00:14:12.436 { 00:14:12.436 "name": "BaseBdev2", 00:14:12.436 "uuid": "f43eaabf-9545-59b5-b34e-03cfe4d7076b", 00:14:12.436 "is_configured": true, 00:14:12.436 "data_offset": 2048, 00:14:12.436 "data_size": 63488 00:14:12.436 } 00:14:12.436 ] 00:14:12.436 }' 00:14:12.436 04:05:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:12.436 04:05:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:13.012 04:05:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:14:13.012 04:05:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:14:13.012 04:05:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:13.012 04:05:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:13.012 [2024-12-06 04:05:06.103478] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:13.012 04:05:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:13.012 04:05:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=63488 00:14:13.012 04:05:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:14:13.012 04:05:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:13.012 04:05:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:13.012 04:05:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 
00:14:13.012 04:05:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:13.012 04:05:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # data_offset=2048 00:14:13.012 04:05:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:14:13.012 04:05:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:14:13.012 04:05:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:14:13.012 04:05:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:14:13.012 04:05:06 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:14:13.012 04:05:06 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:14:13.012 04:05:06 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:14:13.012 04:05:06 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:14:13.012 04:05:06 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:14:13.012 04:05:06 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:14:13.012 04:05:06 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:14:13.012 04:05:06 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:13.012 04:05:06 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:14:13.271 [2024-12-06 04:05:06.390703] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:14:13.271 /dev/nbd0 00:14:13.271 04:05:06 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:14:13.271 04:05:06 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 
00:14:13.271 04:05:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:14:13.271 04:05:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:14:13.271 04:05:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:14:13.271 04:05:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:14:13.271 04:05:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:14:13.271 04:05:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:14:13.271 04:05:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:14:13.271 04:05:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:14:13.271 04:05:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:13.271 1+0 records in 00:14:13.271 1+0 records out 00:14:13.271 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000276008 s, 14.8 MB/s 00:14:13.271 04:05:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:13.271 04:05:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:14:13.271 04:05:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:13.271 04:05:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:14:13.271 04:05:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:14:13.271 04:05:06 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:13.271 04:05:06 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:13.271 04:05:06 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@629 -- # '[' raid1 = raid5f ']' 00:14:13.271 04:05:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@633 -- # write_unit_size=1 00:14:13.271 04:05:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=512 count=63488 oflag=direct 00:14:18.582 63488+0 records in 00:14:18.582 63488+0 records out 00:14:18.582 32505856 bytes (33 MB, 31 MiB) copied, 4.55271 s, 7.1 MB/s 00:14:18.582 04:05:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:14:18.582 04:05:11 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:14:18.582 04:05:11 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:14:18.582 04:05:11 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:14:18.582 04:05:11 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:14:18.582 04:05:11 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:18.582 04:05:11 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:14:18.582 04:05:11 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:14:18.582 [2024-12-06 04:05:11.256299] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:18.582 04:05:11 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:14:18.582 04:05:11 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:14:18.582 04:05:11 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:18.582 04:05:11 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:18.582 04:05:11 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w 
nbd0 /proc/partitions 00:14:18.582 04:05:11 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:14:18.582 04:05:11 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:14:18.582 04:05:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:14:18.582 04:05:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:18.582 04:05:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:18.582 [2024-12-06 04:05:11.272616] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:14:18.582 04:05:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:18.582 04:05:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:14:18.582 04:05:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:18.582 04:05:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:18.582 04:05:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:18.582 04:05:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:18.582 04:05:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:14:18.582 04:05:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:18.582 04:05:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:18.582 04:05:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:18.582 04:05:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:18.582 04:05:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:18.582 04:05:11 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:18.582 04:05:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:18.582 04:05:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:18.582 04:05:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:18.582 04:05:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:18.582 "name": "raid_bdev1", 00:14:18.582 "uuid": "bff79d84-8a8b-4e57-a220-f6de654f12ab", 00:14:18.582 "strip_size_kb": 0, 00:14:18.582 "state": "online", 00:14:18.582 "raid_level": "raid1", 00:14:18.582 "superblock": true, 00:14:18.582 "num_base_bdevs": 2, 00:14:18.582 "num_base_bdevs_discovered": 1, 00:14:18.582 "num_base_bdevs_operational": 1, 00:14:18.582 "base_bdevs_list": [ 00:14:18.582 { 00:14:18.582 "name": null, 00:14:18.582 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:18.582 "is_configured": false, 00:14:18.582 "data_offset": 0, 00:14:18.582 "data_size": 63488 00:14:18.582 }, 00:14:18.582 { 00:14:18.582 "name": "BaseBdev2", 00:14:18.582 "uuid": "f43eaabf-9545-59b5-b34e-03cfe4d7076b", 00:14:18.582 "is_configured": true, 00:14:18.582 "data_offset": 2048, 00:14:18.583 "data_size": 63488 00:14:18.583 } 00:14:18.583 ] 00:14:18.583 }' 00:14:18.583 04:05:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:18.583 04:05:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:18.583 04:05:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:14:18.583 04:05:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:18.583 04:05:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:18.583 [2024-12-06 04:05:11.743875] bdev_raid.c:3326:raid_bdev_configure_base_bdev: 
*DEBUG*: bdev spare is claimed 00:14:18.583 [2024-12-06 04:05:11.763095] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000ca3360 00:14:18.583 04:05:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:18.583 04:05:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@647 -- # sleep 1 00:14:18.583 [2024-12-06 04:05:11.765797] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:14:19.528 04:05:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:19.528 04:05:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:19.528 04:05:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:19.528 04:05:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:19.528 04:05:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:19.528 04:05:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:19.528 04:05:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:19.528 04:05:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:19.528 04:05:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:19.528 04:05:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:19.528 04:05:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:19.528 "name": "raid_bdev1", 00:14:19.528 "uuid": "bff79d84-8a8b-4e57-a220-f6de654f12ab", 00:14:19.528 "strip_size_kb": 0, 00:14:19.528 "state": "online", 00:14:19.528 "raid_level": "raid1", 00:14:19.528 "superblock": true, 00:14:19.528 "num_base_bdevs": 2, 00:14:19.528 
"num_base_bdevs_discovered": 2, 00:14:19.528 "num_base_bdevs_operational": 2, 00:14:19.528 "process": { 00:14:19.528 "type": "rebuild", 00:14:19.528 "target": "spare", 00:14:19.528 "progress": { 00:14:19.528 "blocks": 20480, 00:14:19.528 "percent": 32 00:14:19.528 } 00:14:19.528 }, 00:14:19.528 "base_bdevs_list": [ 00:14:19.528 { 00:14:19.528 "name": "spare", 00:14:19.528 "uuid": "cbe8e37d-7332-53cc-8967-66a2a1046daa", 00:14:19.528 "is_configured": true, 00:14:19.528 "data_offset": 2048, 00:14:19.528 "data_size": 63488 00:14:19.528 }, 00:14:19.528 { 00:14:19.529 "name": "BaseBdev2", 00:14:19.529 "uuid": "f43eaabf-9545-59b5-b34e-03cfe4d7076b", 00:14:19.529 "is_configured": true, 00:14:19.529 "data_offset": 2048, 00:14:19.529 "data_size": 63488 00:14:19.529 } 00:14:19.529 ] 00:14:19.529 }' 00:14:19.529 04:05:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:19.529 04:05:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:19.529 04:05:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:19.788 04:05:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:19.788 04:05:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:14:19.788 04:05:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:19.788 04:05:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:19.788 [2024-12-06 04:05:12.916843] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:19.788 [2024-12-06 04:05:12.971674] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:14:19.788 [2024-12-06 04:05:12.971845] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:19.788 [2024-12-06 04:05:12.971864] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:19.788 [2024-12-06 04:05:12.971875] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:14:19.788 04:05:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:19.788 04:05:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:14:19.788 04:05:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:19.788 04:05:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:19.788 04:05:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:19.788 04:05:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:19.788 04:05:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:14:19.788 04:05:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:19.788 04:05:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:19.788 04:05:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:19.788 04:05:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:19.788 04:05:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:19.788 04:05:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:19.788 04:05:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:19.788 04:05:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:19.788 04:05:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:19.788 04:05:13 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:19.788 "name": "raid_bdev1", 00:14:19.788 "uuid": "bff79d84-8a8b-4e57-a220-f6de654f12ab", 00:14:19.789 "strip_size_kb": 0, 00:14:19.789 "state": "online", 00:14:19.789 "raid_level": "raid1", 00:14:19.789 "superblock": true, 00:14:19.789 "num_base_bdevs": 2, 00:14:19.789 "num_base_bdevs_discovered": 1, 00:14:19.789 "num_base_bdevs_operational": 1, 00:14:19.789 "base_bdevs_list": [ 00:14:19.789 { 00:14:19.789 "name": null, 00:14:19.789 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:19.789 "is_configured": false, 00:14:19.789 "data_offset": 0, 00:14:19.789 "data_size": 63488 00:14:19.789 }, 00:14:19.789 { 00:14:19.789 "name": "BaseBdev2", 00:14:19.789 "uuid": "f43eaabf-9545-59b5-b34e-03cfe4d7076b", 00:14:19.789 "is_configured": true, 00:14:19.789 "data_offset": 2048, 00:14:19.789 "data_size": 63488 00:14:19.789 } 00:14:19.789 ] 00:14:19.789 }' 00:14:19.789 04:05:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:19.789 04:05:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:20.358 04:05:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:20.359 04:05:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:20.359 04:05:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:20.359 04:05:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:20.359 04:05:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:20.359 04:05:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:20.359 04:05:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:20.359 04:05:13 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:14:20.359 04:05:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:20.359 04:05:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:20.359 04:05:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:20.359 "name": "raid_bdev1", 00:14:20.359 "uuid": "bff79d84-8a8b-4e57-a220-f6de654f12ab", 00:14:20.359 "strip_size_kb": 0, 00:14:20.359 "state": "online", 00:14:20.359 "raid_level": "raid1", 00:14:20.359 "superblock": true, 00:14:20.359 "num_base_bdevs": 2, 00:14:20.359 "num_base_bdevs_discovered": 1, 00:14:20.359 "num_base_bdevs_operational": 1, 00:14:20.359 "base_bdevs_list": [ 00:14:20.359 { 00:14:20.359 "name": null, 00:14:20.359 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:20.359 "is_configured": false, 00:14:20.359 "data_offset": 0, 00:14:20.359 "data_size": 63488 00:14:20.359 }, 00:14:20.359 { 00:14:20.359 "name": "BaseBdev2", 00:14:20.359 "uuid": "f43eaabf-9545-59b5-b34e-03cfe4d7076b", 00:14:20.359 "is_configured": true, 00:14:20.359 "data_offset": 2048, 00:14:20.359 "data_size": 63488 00:14:20.359 } 00:14:20.359 ] 00:14:20.359 }' 00:14:20.359 04:05:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:20.359 04:05:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:20.359 04:05:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:20.359 04:05:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:20.359 04:05:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:14:20.359 04:05:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:20.359 04:05:13 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:14:20.359 [2024-12-06 04:05:13.668788] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:20.359 [2024-12-06 04:05:13.685563] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000ca3430 00:14:20.359 04:05:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:20.359 04:05:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@663 -- # sleep 1 00:14:20.359 [2024-12-06 04:05:13.687437] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:14:21.739 04:05:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:21.739 04:05:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:21.739 04:05:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:21.740 04:05:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:21.740 04:05:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:21.740 04:05:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:21.740 04:05:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:21.740 04:05:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:21.740 04:05:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:21.740 04:05:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:21.740 04:05:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:21.740 "name": "raid_bdev1", 00:14:21.740 "uuid": "bff79d84-8a8b-4e57-a220-f6de654f12ab", 00:14:21.740 "strip_size_kb": 0, 00:14:21.740 "state": "online", 
00:14:21.740 "raid_level": "raid1", 00:14:21.740 "superblock": true, 00:14:21.740 "num_base_bdevs": 2, 00:14:21.740 "num_base_bdevs_discovered": 2, 00:14:21.740 "num_base_bdevs_operational": 2, 00:14:21.740 "process": { 00:14:21.740 "type": "rebuild", 00:14:21.740 "target": "spare", 00:14:21.740 "progress": { 00:14:21.740 "blocks": 20480, 00:14:21.740 "percent": 32 00:14:21.740 } 00:14:21.740 }, 00:14:21.740 "base_bdevs_list": [ 00:14:21.740 { 00:14:21.740 "name": "spare", 00:14:21.740 "uuid": "cbe8e37d-7332-53cc-8967-66a2a1046daa", 00:14:21.740 "is_configured": true, 00:14:21.740 "data_offset": 2048, 00:14:21.740 "data_size": 63488 00:14:21.740 }, 00:14:21.740 { 00:14:21.740 "name": "BaseBdev2", 00:14:21.740 "uuid": "f43eaabf-9545-59b5-b34e-03cfe4d7076b", 00:14:21.740 "is_configured": true, 00:14:21.740 "data_offset": 2048, 00:14:21.740 "data_size": 63488 00:14:21.740 } 00:14:21.740 ] 00:14:21.740 }' 00:14:21.740 04:05:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:21.740 04:05:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:21.740 04:05:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:21.740 04:05:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:21.740 04:05:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:14:21.740 04:05:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:14:21.740 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:14:21.740 04:05:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:14:21.740 04:05:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:14:21.740 04:05:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 
']' 00:14:21.740 04:05:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@706 -- # local timeout=392 00:14:21.740 04:05:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:21.740 04:05:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:21.740 04:05:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:21.740 04:05:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:21.740 04:05:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:21.740 04:05:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:21.740 04:05:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:21.740 04:05:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:21.740 04:05:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:21.740 04:05:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:21.740 04:05:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:21.740 04:05:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:21.740 "name": "raid_bdev1", 00:14:21.740 "uuid": "bff79d84-8a8b-4e57-a220-f6de654f12ab", 00:14:21.740 "strip_size_kb": 0, 00:14:21.740 "state": "online", 00:14:21.740 "raid_level": "raid1", 00:14:21.740 "superblock": true, 00:14:21.740 "num_base_bdevs": 2, 00:14:21.740 "num_base_bdevs_discovered": 2, 00:14:21.740 "num_base_bdevs_operational": 2, 00:14:21.740 "process": { 00:14:21.740 "type": "rebuild", 00:14:21.740 "target": "spare", 00:14:21.740 "progress": { 00:14:21.740 "blocks": 22528, 00:14:21.740 "percent": 35 00:14:21.740 } 00:14:21.740 }, 00:14:21.740 
"base_bdevs_list": [ 00:14:21.740 { 00:14:21.740 "name": "spare", 00:14:21.740 "uuid": "cbe8e37d-7332-53cc-8967-66a2a1046daa", 00:14:21.740 "is_configured": true, 00:14:21.740 "data_offset": 2048, 00:14:21.740 "data_size": 63488 00:14:21.740 }, 00:14:21.740 { 00:14:21.740 "name": "BaseBdev2", 00:14:21.740 "uuid": "f43eaabf-9545-59b5-b34e-03cfe4d7076b", 00:14:21.740 "is_configured": true, 00:14:21.740 "data_offset": 2048, 00:14:21.740 "data_size": 63488 00:14:21.740 } 00:14:21.740 ] 00:14:21.740 }' 00:14:21.740 04:05:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:21.740 04:05:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:21.740 04:05:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:21.740 04:05:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:21.740 04:05:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:14:22.677 04:05:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:22.677 04:05:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:22.677 04:05:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:22.677 04:05:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:22.677 04:05:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:22.677 04:05:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:22.677 04:05:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:22.677 04:05:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:22.677 04:05:15 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:22.677 04:05:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:22.677 04:05:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:22.935 04:05:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:22.935 "name": "raid_bdev1", 00:14:22.935 "uuid": "bff79d84-8a8b-4e57-a220-f6de654f12ab", 00:14:22.935 "strip_size_kb": 0, 00:14:22.935 "state": "online", 00:14:22.935 "raid_level": "raid1", 00:14:22.935 "superblock": true, 00:14:22.935 "num_base_bdevs": 2, 00:14:22.935 "num_base_bdevs_discovered": 2, 00:14:22.935 "num_base_bdevs_operational": 2, 00:14:22.935 "process": { 00:14:22.935 "type": "rebuild", 00:14:22.935 "target": "spare", 00:14:22.935 "progress": { 00:14:22.935 "blocks": 47104, 00:14:22.935 "percent": 74 00:14:22.935 } 00:14:22.935 }, 00:14:22.935 "base_bdevs_list": [ 00:14:22.935 { 00:14:22.935 "name": "spare", 00:14:22.935 "uuid": "cbe8e37d-7332-53cc-8967-66a2a1046daa", 00:14:22.935 "is_configured": true, 00:14:22.935 "data_offset": 2048, 00:14:22.935 "data_size": 63488 00:14:22.935 }, 00:14:22.935 { 00:14:22.935 "name": "BaseBdev2", 00:14:22.935 "uuid": "f43eaabf-9545-59b5-b34e-03cfe4d7076b", 00:14:22.935 "is_configured": true, 00:14:22.936 "data_offset": 2048, 00:14:22.936 "data_size": 63488 00:14:22.936 } 00:14:22.936 ] 00:14:22.936 }' 00:14:22.936 04:05:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:22.936 04:05:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:22.936 04:05:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:22.936 04:05:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:22.936 04:05:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- 
# sleep 1 00:14:23.565 [2024-12-06 04:05:16.801913] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:14:23.565 [2024-12-06 04:05:16.802103] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:14:23.565 [2024-12-06 04:05:16.802286] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:23.824 04:05:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:23.824 04:05:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:23.824 04:05:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:23.824 04:05:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:23.824 04:05:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:23.824 04:05:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:23.824 04:05:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:23.824 04:05:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:23.824 04:05:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:23.824 04:05:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:23.824 04:05:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:24.082 04:05:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:24.082 "name": "raid_bdev1", 00:14:24.082 "uuid": "bff79d84-8a8b-4e57-a220-f6de654f12ab", 00:14:24.082 "strip_size_kb": 0, 00:14:24.082 "state": "online", 00:14:24.082 "raid_level": "raid1", 00:14:24.082 "superblock": true, 00:14:24.082 "num_base_bdevs": 2, 00:14:24.082 
"num_base_bdevs_discovered": 2, 00:14:24.082 "num_base_bdevs_operational": 2, 00:14:24.082 "base_bdevs_list": [ 00:14:24.082 { 00:14:24.082 "name": "spare", 00:14:24.083 "uuid": "cbe8e37d-7332-53cc-8967-66a2a1046daa", 00:14:24.083 "is_configured": true, 00:14:24.083 "data_offset": 2048, 00:14:24.083 "data_size": 63488 00:14:24.083 }, 00:14:24.083 { 00:14:24.083 "name": "BaseBdev2", 00:14:24.083 "uuid": "f43eaabf-9545-59b5-b34e-03cfe4d7076b", 00:14:24.083 "is_configured": true, 00:14:24.083 "data_offset": 2048, 00:14:24.083 "data_size": 63488 00:14:24.083 } 00:14:24.083 ] 00:14:24.083 }' 00:14:24.083 04:05:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:24.083 04:05:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:14:24.083 04:05:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:24.083 04:05:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:14:24.083 04:05:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@709 -- # break 00:14:24.083 04:05:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:24.083 04:05:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:24.083 04:05:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:24.083 04:05:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:24.083 04:05:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:24.083 04:05:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:24.083 04:05:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:24.083 04:05:17 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:14:24.083 04:05:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:24.083 04:05:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:24.083 04:05:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:24.083 "name": "raid_bdev1", 00:14:24.083 "uuid": "bff79d84-8a8b-4e57-a220-f6de654f12ab", 00:14:24.083 "strip_size_kb": 0, 00:14:24.083 "state": "online", 00:14:24.083 "raid_level": "raid1", 00:14:24.083 "superblock": true, 00:14:24.083 "num_base_bdevs": 2, 00:14:24.083 "num_base_bdevs_discovered": 2, 00:14:24.083 "num_base_bdevs_operational": 2, 00:14:24.083 "base_bdevs_list": [ 00:14:24.083 { 00:14:24.083 "name": "spare", 00:14:24.083 "uuid": "cbe8e37d-7332-53cc-8967-66a2a1046daa", 00:14:24.083 "is_configured": true, 00:14:24.083 "data_offset": 2048, 00:14:24.083 "data_size": 63488 00:14:24.083 }, 00:14:24.083 { 00:14:24.083 "name": "BaseBdev2", 00:14:24.083 "uuid": "f43eaabf-9545-59b5-b34e-03cfe4d7076b", 00:14:24.083 "is_configured": true, 00:14:24.083 "data_offset": 2048, 00:14:24.083 "data_size": 63488 00:14:24.083 } 00:14:24.083 ] 00:14:24.083 }' 00:14:24.083 04:05:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:24.083 04:05:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:24.083 04:05:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:24.342 04:05:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:24.342 04:05:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:14:24.343 04:05:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:24.343 04:05:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # 
local expected_state=online 00:14:24.343 04:05:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:24.343 04:05:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:24.343 04:05:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:24.343 04:05:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:24.343 04:05:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:24.343 04:05:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:24.343 04:05:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:24.343 04:05:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:24.343 04:05:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:24.343 04:05:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:24.343 04:05:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:24.343 04:05:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:24.343 04:05:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:24.343 "name": "raid_bdev1", 00:14:24.343 "uuid": "bff79d84-8a8b-4e57-a220-f6de654f12ab", 00:14:24.343 "strip_size_kb": 0, 00:14:24.343 "state": "online", 00:14:24.343 "raid_level": "raid1", 00:14:24.343 "superblock": true, 00:14:24.343 "num_base_bdevs": 2, 00:14:24.343 "num_base_bdevs_discovered": 2, 00:14:24.343 "num_base_bdevs_operational": 2, 00:14:24.343 "base_bdevs_list": [ 00:14:24.343 { 00:14:24.343 "name": "spare", 00:14:24.343 "uuid": "cbe8e37d-7332-53cc-8967-66a2a1046daa", 00:14:24.343 "is_configured": true, 00:14:24.343 "data_offset": 2048, 00:14:24.343 
"data_size": 63488 00:14:24.343 }, 00:14:24.343 { 00:14:24.343 "name": "BaseBdev2", 00:14:24.343 "uuid": "f43eaabf-9545-59b5-b34e-03cfe4d7076b", 00:14:24.343 "is_configured": true, 00:14:24.343 "data_offset": 2048, 00:14:24.343 "data_size": 63488 00:14:24.343 } 00:14:24.343 ] 00:14:24.343 }' 00:14:24.343 04:05:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:24.343 04:05:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:24.603 04:05:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:14:24.603 04:05:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:24.603 04:05:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:24.603 [2024-12-06 04:05:17.902153] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:24.603 [2024-12-06 04:05:17.902238] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:24.603 [2024-12-06 04:05:17.902351] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:24.603 [2024-12-06 04:05:17.902445] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:24.603 [2024-12-06 04:05:17.902500] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:14:24.603 04:05:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:24.603 04:05:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:24.603 04:05:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:24.603 04:05:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:24.603 04:05:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # jq length 
00:14:24.603 04:05:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:24.862 04:05:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:14:24.862 04:05:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:14:24.862 04:05:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:14:24.862 04:05:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:14:24.862 04:05:17 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:14:24.862 04:05:17 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:14:24.862 04:05:17 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:14:24.862 04:05:17 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:14:24.862 04:05:17 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:14:24.862 04:05:17 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:14:24.862 04:05:17 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:14:24.862 04:05:17 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:14:24.862 04:05:17 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:14:24.862 /dev/nbd0 00:14:25.123 04:05:18 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:14:25.123 04:05:18 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:14:25.123 04:05:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:14:25.123 04:05:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 
-- # local i 00:14:25.123 04:05:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:14:25.123 04:05:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:14:25.123 04:05:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:14:25.123 04:05:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:14:25.123 04:05:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:14:25.123 04:05:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:14:25.123 04:05:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:25.123 1+0 records in 00:14:25.123 1+0 records out 00:14:25.123 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000325926 s, 12.6 MB/s 00:14:25.123 04:05:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:25.123 04:05:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:14:25.123 04:05:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:25.123 04:05:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:14:25.123 04:05:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:14:25.123 04:05:18 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:25.123 04:05:18 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:14:25.123 04:05:18 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:14:25.382 /dev/nbd1 00:14:25.382 04:05:18 
bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:14:25.382 04:05:18 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:14:25.382 04:05:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:14:25.382 04:05:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:14:25.382 04:05:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:14:25.382 04:05:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:14:25.382 04:05:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:14:25.382 04:05:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:14:25.382 04:05:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:14:25.382 04:05:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:14:25.382 04:05:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:25.382 1+0 records in 00:14:25.382 1+0 records out 00:14:25.382 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000451262 s, 9.1 MB/s 00:14:25.382 04:05:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:25.382 04:05:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:14:25.382 04:05:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:25.382 04:05:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:14:25.382 04:05:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:14:25.382 04:05:18 
bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:25.382 04:05:18 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:14:25.382 04:05:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:14:25.642 04:05:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:14:25.642 04:05:18 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:14:25.642 04:05:18 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:14:25.642 04:05:18 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:14:25.642 04:05:18 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:14:25.642 04:05:18 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:25.642 04:05:18 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:14:25.901 04:05:19 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:14:25.901 04:05:19 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:14:25.901 04:05:19 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:14:25.901 04:05:19 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:25.901 04:05:19 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:25.901 04:05:19 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:14:25.901 04:05:19 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:14:25.901 04:05:19 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:14:25.901 04:05:19 bdev_raid.raid_rebuild_test_sb -- 
bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:25.901 04:05:19 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:14:26.161 04:05:19 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:14:26.161 04:05:19 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:14:26.161 04:05:19 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:14:26.161 04:05:19 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:26.161 04:05:19 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:26.161 04:05:19 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:14:26.161 04:05:19 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:14:26.161 04:05:19 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:14:26.161 04:05:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:14:26.161 04:05:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:14:26.162 04:05:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:26.162 04:05:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:26.162 04:05:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:26.162 04:05:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:14:26.162 04:05:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:26.162 04:05:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:26.162 [2024-12-06 04:05:19.303040] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on 
spare_delay 00:14:26.162 [2024-12-06 04:05:19.303110] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:26.162 [2024-12-06 04:05:19.303136] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:14:26.162 [2024-12-06 04:05:19.303147] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:26.162 [2024-12-06 04:05:19.305643] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:26.162 [2024-12-06 04:05:19.305687] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:14:26.162 [2024-12-06 04:05:19.305790] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:14:26.162 [2024-12-06 04:05:19.305849] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:26.162 [2024-12-06 04:05:19.306059] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:26.162 spare 00:14:26.162 04:05:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:26.162 04:05:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:14:26.162 04:05:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:26.162 04:05:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:26.162 [2024-12-06 04:05:19.405997] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:14:26.162 [2024-12-06 04:05:19.406085] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:14:26.162 [2024-12-06 04:05:19.406466] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc1ae0 00:14:26.162 [2024-12-06 04:05:19.406732] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:14:26.162 [2024-12-06 04:05:19.406747] 
bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:14:26.162 [2024-12-06 04:05:19.406993] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:26.162 04:05:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:26.162 04:05:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:14:26.162 04:05:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:26.162 04:05:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:26.162 04:05:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:26.162 04:05:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:26.162 04:05:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:26.162 04:05:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:26.162 04:05:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:26.162 04:05:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:26.162 04:05:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:26.162 04:05:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:26.162 04:05:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:26.162 04:05:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:26.162 04:05:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:26.162 04:05:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:26.162 
04:05:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:26.162 "name": "raid_bdev1", 00:14:26.162 "uuid": "bff79d84-8a8b-4e57-a220-f6de654f12ab", 00:14:26.162 "strip_size_kb": 0, 00:14:26.162 "state": "online", 00:14:26.162 "raid_level": "raid1", 00:14:26.162 "superblock": true, 00:14:26.162 "num_base_bdevs": 2, 00:14:26.162 "num_base_bdevs_discovered": 2, 00:14:26.162 "num_base_bdevs_operational": 2, 00:14:26.162 "base_bdevs_list": [ 00:14:26.162 { 00:14:26.162 "name": "spare", 00:14:26.162 "uuid": "cbe8e37d-7332-53cc-8967-66a2a1046daa", 00:14:26.162 "is_configured": true, 00:14:26.162 "data_offset": 2048, 00:14:26.162 "data_size": 63488 00:14:26.162 }, 00:14:26.162 { 00:14:26.162 "name": "BaseBdev2", 00:14:26.162 "uuid": "f43eaabf-9545-59b5-b34e-03cfe4d7076b", 00:14:26.162 "is_configured": true, 00:14:26.162 "data_offset": 2048, 00:14:26.162 "data_size": 63488 00:14:26.162 } 00:14:26.162 ] 00:14:26.162 }' 00:14:26.162 04:05:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:26.162 04:05:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:26.730 04:05:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:26.730 04:05:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:26.730 04:05:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:26.730 04:05:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:26.730 04:05:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:26.730 04:05:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:26.730 04:05:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:26.730 04:05:19 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:26.730 04:05:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:26.730 04:05:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:26.730 04:05:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:26.730 "name": "raid_bdev1", 00:14:26.730 "uuid": "bff79d84-8a8b-4e57-a220-f6de654f12ab", 00:14:26.730 "strip_size_kb": 0, 00:14:26.730 "state": "online", 00:14:26.730 "raid_level": "raid1", 00:14:26.730 "superblock": true, 00:14:26.730 "num_base_bdevs": 2, 00:14:26.730 "num_base_bdevs_discovered": 2, 00:14:26.730 "num_base_bdevs_operational": 2, 00:14:26.730 "base_bdevs_list": [ 00:14:26.730 { 00:14:26.730 "name": "spare", 00:14:26.730 "uuid": "cbe8e37d-7332-53cc-8967-66a2a1046daa", 00:14:26.730 "is_configured": true, 00:14:26.730 "data_offset": 2048, 00:14:26.730 "data_size": 63488 00:14:26.730 }, 00:14:26.730 { 00:14:26.730 "name": "BaseBdev2", 00:14:26.730 "uuid": "f43eaabf-9545-59b5-b34e-03cfe4d7076b", 00:14:26.730 "is_configured": true, 00:14:26.730 "data_offset": 2048, 00:14:26.730 "data_size": 63488 00:14:26.730 } 00:14:26.730 ] 00:14:26.730 }' 00:14:26.730 04:05:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:26.730 04:05:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:26.730 04:05:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:26.730 04:05:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:26.730 04:05:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:26.730 04:05:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:26.730 04:05:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 
-- # jq -r '.[].base_bdevs_list[0].name' 00:14:26.730 04:05:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:26.730 04:05:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:26.730 04:05:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:14:26.730 04:05:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:14:26.730 04:05:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:26.730 04:05:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:26.730 [2024-12-06 04:05:20.061939] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:26.730 04:05:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:26.730 04:05:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:14:26.730 04:05:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:26.730 04:05:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:26.730 04:05:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:26.730 04:05:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:26.730 04:05:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:14:26.730 04:05:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:26.730 04:05:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:26.730 04:05:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:26.730 04:05:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:26.730 
04:05:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:26.730 04:05:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:26.730 04:05:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:26.730 04:05:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:26.989 04:05:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:26.989 04:05:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:26.989 "name": "raid_bdev1", 00:14:26.989 "uuid": "bff79d84-8a8b-4e57-a220-f6de654f12ab", 00:14:26.989 "strip_size_kb": 0, 00:14:26.989 "state": "online", 00:14:26.989 "raid_level": "raid1", 00:14:26.989 "superblock": true, 00:14:26.989 "num_base_bdevs": 2, 00:14:26.989 "num_base_bdevs_discovered": 1, 00:14:26.989 "num_base_bdevs_operational": 1, 00:14:26.989 "base_bdevs_list": [ 00:14:26.989 { 00:14:26.989 "name": null, 00:14:26.989 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:26.989 "is_configured": false, 00:14:26.989 "data_offset": 0, 00:14:26.989 "data_size": 63488 00:14:26.989 }, 00:14:26.989 { 00:14:26.989 "name": "BaseBdev2", 00:14:26.989 "uuid": "f43eaabf-9545-59b5-b34e-03cfe4d7076b", 00:14:26.989 "is_configured": true, 00:14:26.989 "data_offset": 2048, 00:14:26.989 "data_size": 63488 00:14:26.989 } 00:14:26.989 ] 00:14:26.989 }' 00:14:26.989 04:05:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:26.989 04:05:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:27.251 04:05:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:14:27.251 04:05:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:27.251 04:05:20 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:14:27.251 [2024-12-06 04:05:20.529189] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:27.251 [2024-12-06 04:05:20.529419] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:14:27.251 [2024-12-06 04:05:20.529536] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:14:27.251 [2024-12-06 04:05:20.529586] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:27.251 [2024-12-06 04:05:20.547876] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc1bb0 00:14:27.251 04:05:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:27.251 04:05:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@757 -- # sleep 1 00:14:27.251 [2024-12-06 04:05:20.549951] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:14:28.633 04:05:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:28.633 04:05:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:28.633 04:05:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:28.633 04:05:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:28.633 04:05:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:28.633 04:05:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:28.633 04:05:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:28.633 04:05:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:28.633 04:05:21 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:28.633 04:05:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:28.633 04:05:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:28.633 "name": "raid_bdev1", 00:14:28.633 "uuid": "bff79d84-8a8b-4e57-a220-f6de654f12ab", 00:14:28.633 "strip_size_kb": 0, 00:14:28.633 "state": "online", 00:14:28.633 "raid_level": "raid1", 00:14:28.633 "superblock": true, 00:14:28.633 "num_base_bdevs": 2, 00:14:28.633 "num_base_bdevs_discovered": 2, 00:14:28.633 "num_base_bdevs_operational": 2, 00:14:28.633 "process": { 00:14:28.633 "type": "rebuild", 00:14:28.633 "target": "spare", 00:14:28.633 "progress": { 00:14:28.633 "blocks": 20480, 00:14:28.633 "percent": 32 00:14:28.633 } 00:14:28.633 }, 00:14:28.633 "base_bdevs_list": [ 00:14:28.633 { 00:14:28.633 "name": "spare", 00:14:28.633 "uuid": "cbe8e37d-7332-53cc-8967-66a2a1046daa", 00:14:28.633 "is_configured": true, 00:14:28.633 "data_offset": 2048, 00:14:28.633 "data_size": 63488 00:14:28.633 }, 00:14:28.633 { 00:14:28.633 "name": "BaseBdev2", 00:14:28.633 "uuid": "f43eaabf-9545-59b5-b34e-03cfe4d7076b", 00:14:28.633 "is_configured": true, 00:14:28.633 "data_offset": 2048, 00:14:28.633 "data_size": 63488 00:14:28.633 } 00:14:28.633 ] 00:14:28.633 }' 00:14:28.633 04:05:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:28.633 04:05:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:28.633 04:05:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:28.633 04:05:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:28.633 04:05:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:14:28.633 04:05:21 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:14:28.633 04:05:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:28.633 [2024-12-06 04:05:21.713452] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:28.633 [2024-12-06 04:05:21.755607] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:14:28.633 [2024-12-06 04:05:21.755725] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:28.633 [2024-12-06 04:05:21.755743] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:28.633 [2024-12-06 04:05:21.755755] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:14:28.633 04:05:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:28.633 04:05:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:14:28.633 04:05:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:28.633 04:05:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:28.633 04:05:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:28.633 04:05:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:28.633 04:05:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:14:28.633 04:05:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:28.633 04:05:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:28.633 04:05:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:28.633 04:05:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:28.633 
04:05:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:28.633 04:05:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:28.633 04:05:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:28.633 04:05:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:28.633 04:05:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:28.633 04:05:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:28.633 "name": "raid_bdev1", 00:14:28.633 "uuid": "bff79d84-8a8b-4e57-a220-f6de654f12ab", 00:14:28.633 "strip_size_kb": 0, 00:14:28.633 "state": "online", 00:14:28.633 "raid_level": "raid1", 00:14:28.633 "superblock": true, 00:14:28.633 "num_base_bdevs": 2, 00:14:28.633 "num_base_bdevs_discovered": 1, 00:14:28.633 "num_base_bdevs_operational": 1, 00:14:28.633 "base_bdevs_list": [ 00:14:28.633 { 00:14:28.633 "name": null, 00:14:28.633 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:28.633 "is_configured": false, 00:14:28.633 "data_offset": 0, 00:14:28.633 "data_size": 63488 00:14:28.633 }, 00:14:28.633 { 00:14:28.633 "name": "BaseBdev2", 00:14:28.633 "uuid": "f43eaabf-9545-59b5-b34e-03cfe4d7076b", 00:14:28.633 "is_configured": true, 00:14:28.633 "data_offset": 2048, 00:14:28.633 "data_size": 63488 00:14:28.633 } 00:14:28.633 ] 00:14:28.633 }' 00:14:28.633 04:05:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:28.633 04:05:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:29.203 04:05:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:14:29.203 04:05:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:29.203 04:05:22 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:14:29.203 [2024-12-06 04:05:22.274573] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:14:29.203 [2024-12-06 04:05:22.274709] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:29.203 [2024-12-06 04:05:22.274751] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:14:29.203 [2024-12-06 04:05:22.274782] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:29.203 [2024-12-06 04:05:22.275280] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:29.203 [2024-12-06 04:05:22.275351] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:14:29.203 [2024-12-06 04:05:22.275497] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:14:29.203 [2024-12-06 04:05:22.275546] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:14:29.203 [2024-12-06 04:05:22.275591] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:14:29.203 [2024-12-06 04:05:22.275696] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:29.203 [2024-12-06 04:05:22.294153] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc1c80 00:14:29.203 spare 00:14:29.203 04:05:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:29.203 04:05:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@764 -- # sleep 1 00:14:29.203 [2024-12-06 04:05:22.296169] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:14:30.140 04:05:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:30.140 04:05:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:30.140 04:05:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:30.140 04:05:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:30.140 04:05:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:30.141 04:05:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:30.141 04:05:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:30.141 04:05:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:30.141 04:05:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:30.141 04:05:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:30.141 04:05:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:30.141 "name": "raid_bdev1", 00:14:30.141 "uuid": "bff79d84-8a8b-4e57-a220-f6de654f12ab", 00:14:30.141 "strip_size_kb": 0, 00:14:30.141 "state": "online", 00:14:30.141 
"raid_level": "raid1", 00:14:30.141 "superblock": true, 00:14:30.141 "num_base_bdevs": 2, 00:14:30.141 "num_base_bdevs_discovered": 2, 00:14:30.141 "num_base_bdevs_operational": 2, 00:14:30.141 "process": { 00:14:30.141 "type": "rebuild", 00:14:30.141 "target": "spare", 00:14:30.141 "progress": { 00:14:30.141 "blocks": 20480, 00:14:30.141 "percent": 32 00:14:30.141 } 00:14:30.141 }, 00:14:30.141 "base_bdevs_list": [ 00:14:30.141 { 00:14:30.141 "name": "spare", 00:14:30.141 "uuid": "cbe8e37d-7332-53cc-8967-66a2a1046daa", 00:14:30.141 "is_configured": true, 00:14:30.141 "data_offset": 2048, 00:14:30.141 "data_size": 63488 00:14:30.141 }, 00:14:30.141 { 00:14:30.141 "name": "BaseBdev2", 00:14:30.141 "uuid": "f43eaabf-9545-59b5-b34e-03cfe4d7076b", 00:14:30.141 "is_configured": true, 00:14:30.141 "data_offset": 2048, 00:14:30.141 "data_size": 63488 00:14:30.141 } 00:14:30.141 ] 00:14:30.141 }' 00:14:30.141 04:05:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:30.141 04:05:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:30.141 04:05:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:30.141 04:05:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:30.141 04:05:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:14:30.141 04:05:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:30.141 04:05:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:30.141 [2024-12-06 04:05:23.463786] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:30.401 [2024-12-06 04:05:23.502132] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:14:30.401 [2024-12-06 04:05:23.502223] bdev_raid.c: 
345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:30.401 [2024-12-06 04:05:23.502243] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:30.401 [2024-12-06 04:05:23.502252] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:14:30.401 04:05:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:30.401 04:05:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:14:30.401 04:05:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:30.401 04:05:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:30.401 04:05:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:30.401 04:05:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:30.401 04:05:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:14:30.401 04:05:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:30.401 04:05:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:30.401 04:05:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:30.401 04:05:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:30.401 04:05:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:30.401 04:05:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:30.401 04:05:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:30.401 04:05:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:30.401 04:05:23 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:30.401 04:05:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:30.401 "name": "raid_bdev1", 00:14:30.401 "uuid": "bff79d84-8a8b-4e57-a220-f6de654f12ab", 00:14:30.401 "strip_size_kb": 0, 00:14:30.401 "state": "online", 00:14:30.401 "raid_level": "raid1", 00:14:30.401 "superblock": true, 00:14:30.401 "num_base_bdevs": 2, 00:14:30.401 "num_base_bdevs_discovered": 1, 00:14:30.401 "num_base_bdevs_operational": 1, 00:14:30.401 "base_bdevs_list": [ 00:14:30.401 { 00:14:30.401 "name": null, 00:14:30.401 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:30.401 "is_configured": false, 00:14:30.401 "data_offset": 0, 00:14:30.401 "data_size": 63488 00:14:30.401 }, 00:14:30.401 { 00:14:30.401 "name": "BaseBdev2", 00:14:30.401 "uuid": "f43eaabf-9545-59b5-b34e-03cfe4d7076b", 00:14:30.401 "is_configured": true, 00:14:30.401 "data_offset": 2048, 00:14:30.401 "data_size": 63488 00:14:30.401 } 00:14:30.401 ] 00:14:30.401 }' 00:14:30.401 04:05:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:30.401 04:05:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:30.720 04:05:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:30.720 04:05:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:30.720 04:05:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:30.720 04:05:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:30.720 04:05:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:30.720 04:05:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:30.720 04:05:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # 
jq -r '.[] | select(.name == "raid_bdev1")' 00:14:30.720 04:05:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:30.720 04:05:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:30.720 04:05:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:30.720 04:05:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:30.720 "name": "raid_bdev1", 00:14:30.720 "uuid": "bff79d84-8a8b-4e57-a220-f6de654f12ab", 00:14:30.720 "strip_size_kb": 0, 00:14:30.720 "state": "online", 00:14:30.720 "raid_level": "raid1", 00:14:30.720 "superblock": true, 00:14:30.720 "num_base_bdevs": 2, 00:14:30.720 "num_base_bdevs_discovered": 1, 00:14:30.720 "num_base_bdevs_operational": 1, 00:14:30.720 "base_bdevs_list": [ 00:14:30.720 { 00:14:30.720 "name": null, 00:14:30.720 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:30.720 "is_configured": false, 00:14:30.720 "data_offset": 0, 00:14:30.720 "data_size": 63488 00:14:30.720 }, 00:14:30.720 { 00:14:30.720 "name": "BaseBdev2", 00:14:30.720 "uuid": "f43eaabf-9545-59b5-b34e-03cfe4d7076b", 00:14:30.720 "is_configured": true, 00:14:30.720 "data_offset": 2048, 00:14:30.720 "data_size": 63488 00:14:30.720 } 00:14:30.720 ] 00:14:30.720 }' 00:14:30.720 04:05:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:30.982 04:05:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:30.982 04:05:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:30.982 04:05:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:30.982 04:05:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:14:30.982 04:05:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 
00:14:30.982 04:05:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:30.982 04:05:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:30.982 04:05:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:14:30.982 04:05:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:30.982 04:05:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:30.982 [2024-12-06 04:05:24.165809] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:14:30.982 [2024-12-06 04:05:24.165878] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:30.982 [2024-12-06 04:05:24.165910] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:14:30.982 [2024-12-06 04:05:24.165932] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:30.982 [2024-12-06 04:05:24.166469] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:30.982 [2024-12-06 04:05:24.166503] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:14:30.982 [2024-12-06 04:05:24.166596] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:14:30.982 [2024-12-06 04:05:24.166616] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:14:30.982 [2024-12-06 04:05:24.166631] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:14:30.982 [2024-12-06 04:05:24.166643] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:14:30.982 BaseBdev1 00:14:30.982 04:05:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:14:30.982 04:05:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@775 -- # sleep 1 00:14:31.923 04:05:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:14:31.923 04:05:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:31.923 04:05:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:31.923 04:05:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:31.923 04:05:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:31.923 04:05:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:14:31.923 04:05:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:31.923 04:05:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:31.923 04:05:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:31.923 04:05:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:31.923 04:05:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:31.923 04:05:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:31.923 04:05:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:31.923 04:05:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:31.923 04:05:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:31.923 04:05:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:31.923 "name": "raid_bdev1", 00:14:31.923 "uuid": "bff79d84-8a8b-4e57-a220-f6de654f12ab", 00:14:31.923 "strip_size_kb": 0, 
00:14:31.923 "state": "online", 00:14:31.923 "raid_level": "raid1", 00:14:31.923 "superblock": true, 00:14:31.923 "num_base_bdevs": 2, 00:14:31.923 "num_base_bdevs_discovered": 1, 00:14:31.923 "num_base_bdevs_operational": 1, 00:14:31.923 "base_bdevs_list": [ 00:14:31.923 { 00:14:31.923 "name": null, 00:14:31.923 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:31.923 "is_configured": false, 00:14:31.923 "data_offset": 0, 00:14:31.923 "data_size": 63488 00:14:31.923 }, 00:14:31.923 { 00:14:31.923 "name": "BaseBdev2", 00:14:31.923 "uuid": "f43eaabf-9545-59b5-b34e-03cfe4d7076b", 00:14:31.923 "is_configured": true, 00:14:31.923 "data_offset": 2048, 00:14:31.923 "data_size": 63488 00:14:31.923 } 00:14:31.923 ] 00:14:31.923 }' 00:14:31.923 04:05:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:31.923 04:05:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:32.491 04:05:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:32.491 04:05:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:32.491 04:05:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:32.491 04:05:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:32.491 04:05:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:32.491 04:05:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:32.491 04:05:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:32.491 04:05:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:32.491 04:05:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:32.491 04:05:25 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:32.491 04:05:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:32.491 "name": "raid_bdev1", 00:14:32.491 "uuid": "bff79d84-8a8b-4e57-a220-f6de654f12ab", 00:14:32.491 "strip_size_kb": 0, 00:14:32.491 "state": "online", 00:14:32.491 "raid_level": "raid1", 00:14:32.491 "superblock": true, 00:14:32.491 "num_base_bdevs": 2, 00:14:32.491 "num_base_bdevs_discovered": 1, 00:14:32.491 "num_base_bdevs_operational": 1, 00:14:32.491 "base_bdevs_list": [ 00:14:32.491 { 00:14:32.491 "name": null, 00:14:32.491 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:32.491 "is_configured": false, 00:14:32.491 "data_offset": 0, 00:14:32.491 "data_size": 63488 00:14:32.491 }, 00:14:32.491 { 00:14:32.491 "name": "BaseBdev2", 00:14:32.491 "uuid": "f43eaabf-9545-59b5-b34e-03cfe4d7076b", 00:14:32.491 "is_configured": true, 00:14:32.491 "data_offset": 2048, 00:14:32.491 "data_size": 63488 00:14:32.491 } 00:14:32.491 ] 00:14:32.491 }' 00:14:32.491 04:05:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:32.491 04:05:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:32.491 04:05:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:32.491 04:05:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:32.491 04:05:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:14:32.491 04:05:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@652 -- # local es=0 00:14:32.491 04:05:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:14:32.491 04:05:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:14:32.491 04:05:25 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:32.491 04:05:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:14:32.491 04:05:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:32.491 04:05:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:14:32.491 04:05:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:32.491 04:05:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:32.491 [2024-12-06 04:05:25.791205] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:32.491 [2024-12-06 04:05:25.791392] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:14:32.491 [2024-12-06 04:05:25.791435] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:14:32.491 request: 00:14:32.491 { 00:14:32.491 "base_bdev": "BaseBdev1", 00:14:32.491 "raid_bdev": "raid_bdev1", 00:14:32.491 "method": "bdev_raid_add_base_bdev", 00:14:32.491 "req_id": 1 00:14:32.491 } 00:14:32.491 Got JSON-RPC error response 00:14:32.491 response: 00:14:32.491 { 00:14:32.491 "code": -22, 00:14:32.491 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:14:32.491 } 00:14:32.491 04:05:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:14:32.491 04:05:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@655 -- # es=1 00:14:32.491 04:05:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:14:32.491 04:05:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:14:32.491 04:05:25 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@679 -- # (( !es == 0 )) 00:14:32.491 04:05:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@779 -- # sleep 1 00:14:33.872 04:05:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:14:33.872 04:05:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:33.872 04:05:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:33.872 04:05:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:33.872 04:05:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:33.872 04:05:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:14:33.872 04:05:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:33.872 04:05:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:33.872 04:05:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:33.872 04:05:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:33.872 04:05:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:33.872 04:05:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:33.872 04:05:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:33.872 04:05:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:33.872 04:05:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:33.872 04:05:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:33.872 "name": "raid_bdev1", 00:14:33.872 "uuid": "bff79d84-8a8b-4e57-a220-f6de654f12ab", 
00:14:33.872 "strip_size_kb": 0, 00:14:33.872 "state": "online", 00:14:33.872 "raid_level": "raid1", 00:14:33.872 "superblock": true, 00:14:33.872 "num_base_bdevs": 2, 00:14:33.872 "num_base_bdevs_discovered": 1, 00:14:33.872 "num_base_bdevs_operational": 1, 00:14:33.872 "base_bdevs_list": [ 00:14:33.872 { 00:14:33.872 "name": null, 00:14:33.872 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:33.872 "is_configured": false, 00:14:33.872 "data_offset": 0, 00:14:33.872 "data_size": 63488 00:14:33.872 }, 00:14:33.872 { 00:14:33.872 "name": "BaseBdev2", 00:14:33.872 "uuid": "f43eaabf-9545-59b5-b34e-03cfe4d7076b", 00:14:33.872 "is_configured": true, 00:14:33.872 "data_offset": 2048, 00:14:33.872 "data_size": 63488 00:14:33.872 } 00:14:33.872 ] 00:14:33.872 }' 00:14:33.872 04:05:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:33.872 04:05:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:34.132 04:05:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:34.132 04:05:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:34.132 04:05:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:34.132 04:05:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:34.132 04:05:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:34.132 04:05:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:34.132 04:05:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:34.132 04:05:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:34.132 04:05:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:34.132 04:05:27 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:34.132 04:05:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:34.132 "name": "raid_bdev1", 00:14:34.132 "uuid": "bff79d84-8a8b-4e57-a220-f6de654f12ab", 00:14:34.132 "strip_size_kb": 0, 00:14:34.132 "state": "online", 00:14:34.132 "raid_level": "raid1", 00:14:34.132 "superblock": true, 00:14:34.132 "num_base_bdevs": 2, 00:14:34.132 "num_base_bdevs_discovered": 1, 00:14:34.132 "num_base_bdevs_operational": 1, 00:14:34.132 "base_bdevs_list": [ 00:14:34.132 { 00:14:34.132 "name": null, 00:14:34.132 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:34.132 "is_configured": false, 00:14:34.132 "data_offset": 0, 00:14:34.132 "data_size": 63488 00:14:34.132 }, 00:14:34.132 { 00:14:34.132 "name": "BaseBdev2", 00:14:34.132 "uuid": "f43eaabf-9545-59b5-b34e-03cfe4d7076b", 00:14:34.132 "is_configured": true, 00:14:34.132 "data_offset": 2048, 00:14:34.132 "data_size": 63488 00:14:34.132 } 00:14:34.132 ] 00:14:34.132 }' 00:14:34.132 04:05:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:34.132 04:05:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:34.132 04:05:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:34.132 04:05:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:34.132 04:05:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@784 -- # killprocess 75865 00:14:34.132 04:05:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@954 -- # '[' -z 75865 ']' 00:14:34.132 04:05:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@958 -- # kill -0 75865 00:14:34.132 04:05:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@959 -- # uname 00:14:34.132 04:05:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@959 -- # '[' 
Linux = Linux ']' 00:14:34.132 04:05:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 75865 00:14:34.132 04:05:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:34.132 04:05:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:34.132 killing process with pid 75865 00:14:34.132 04:05:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 75865' 00:14:34.132 04:05:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@973 -- # kill 75865 00:14:34.132 Received shutdown signal, test time was about 60.000000 seconds 00:14:34.132 00:14:34.132 Latency(us) 00:14:34.132 [2024-12-06T04:05:27.486Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:34.132 [2024-12-06T04:05:27.486Z] =================================================================================================================== 00:14:34.132 [2024-12-06T04:05:27.486Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:14:34.132 [2024-12-06 04:05:27.430877] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:34.132 [2024-12-06 04:05:27.431016] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:34.132 [2024-12-06 04:05:27.431094] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:34.132 [2024-12-06 04:05:27.431109] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:14:34.132 04:05:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@978 -- # wait 75865 00:14:34.702 [2024-12-06 04:05:27.784224] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:36.080 04:05:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@786 -- # return 0 00:14:36.080 00:14:36.080 real 0m24.584s 
00:14:36.080 user 0m29.945s 00:14:36.080 sys 0m4.096s 00:14:36.080 04:05:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:36.080 04:05:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:36.080 ************************************ 00:14:36.080 END TEST raid_rebuild_test_sb 00:14:36.080 ************************************ 00:14:36.080 04:05:29 bdev_raid -- bdev/bdev_raid.sh@980 -- # run_test raid_rebuild_test_io raid_rebuild_test raid1 2 false true true 00:14:36.080 04:05:29 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:14:36.080 04:05:29 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:36.080 04:05:29 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:14:36.080 ************************************ 00:14:36.080 START TEST raid_rebuild_test_io 00:14:36.080 ************************************ 00:14:36.080 04:05:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 2 false true true 00:14:36.080 04:05:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:14:36.080 04:05:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2 00:14:36.080 04:05:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@571 -- # local superblock=false 00:14:36.080 04:05:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@572 -- # local background_io=true 00:14:36.080 04:05:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@573 -- # local verify=true 00:14:36.080 04:05:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:14:36.080 04:05:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:36.080 04:05:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:14:36.080 04:05:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:14:36.080 
04:05:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:36.080 04:05:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:14:36.080 04:05:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:14:36.080 04:05:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:36.080 04:05:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:14:36.080 04:05:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:14:36.080 04:05:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:14:36.080 04:05:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # local strip_size 00:14:36.080 04:05:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@577 -- # local create_arg 00:14:36.080 04:05:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:14:36.080 04:05:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@579 -- # local data_offset 00:14:36.080 04:05:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:14:36.080 04:05:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:14:36.080 04:05:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@592 -- # '[' false = true ']' 00:14:36.080 04:05:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@597 -- # raid_pid=76616 00:14:36.080 04:05:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@598 -- # waitforlisten 76616 00:14:36.080 04:05:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@835 -- # '[' -z 76616 ']' 00:14:36.080 04:05:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:14:36.080 04:05:29 
bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:36.080 04:05:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:36.080 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:36.081 04:05:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:36.081 04:05:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:36.081 04:05:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:36.081 I/O size of 3145728 is greater than zero copy threshold (65536). 00:14:36.081 Zero copy mechanism will not be used. 00:14:36.081 [2024-12-06 04:05:29.264125] Starting SPDK v25.01-pre git sha1 a4d2a837b / DPDK 24.03.0 initialization... 00:14:36.081 [2024-12-06 04:05:29.264248] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76616 ] 00:14:36.340 [2024-12-06 04:05:29.438832] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:36.340 [2024-12-06 04:05:29.575027] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:36.599 [2024-12-06 04:05:29.814530] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:36.599 [2024-12-06 04:05:29.814604] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:36.858 04:05:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:36.858 04:05:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@868 -- # return 0 00:14:36.858 04:05:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # for bdev 
in "${base_bdevs[@]}" 00:14:36.858 04:05:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:14:36.858 04:05:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:36.858 04:05:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:36.858 BaseBdev1_malloc 00:14:36.858 04:05:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:36.858 04:05:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:14:36.858 04:05:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:36.858 04:05:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:36.858 [2024-12-06 04:05:30.202887] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:14:36.858 [2024-12-06 04:05:30.202960] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:36.858 [2024-12-06 04:05:30.202986] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:14:36.858 [2024-12-06 04:05:30.203000] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:36.858 [2024-12-06 04:05:30.205449] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:36.858 [2024-12-06 04:05:30.205500] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:14:36.858 BaseBdev1 00:14:36.858 04:05:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:36.858 04:05:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:14:36.858 04:05:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:14:36.858 04:05:30 
bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:36.858 04:05:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:37.118 BaseBdev2_malloc 00:14:37.118 04:05:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:37.118 04:05:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:14:37.118 04:05:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:37.118 04:05:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:37.118 [2024-12-06 04:05:30.263558] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:14:37.118 [2024-12-06 04:05:30.263647] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:37.118 [2024-12-06 04:05:30.263680] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:14:37.118 [2024-12-06 04:05:30.263698] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:37.118 [2024-12-06 04:05:30.266226] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:37.118 [2024-12-06 04:05:30.266276] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:14:37.118 BaseBdev2 00:14:37.118 04:05:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:37.118 04:05:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:14:37.118 04:05:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:37.118 04:05:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:37.118 spare_malloc 00:14:37.118 04:05:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:14:37.118 04:05:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:14:37.118 04:05:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:37.118 04:05:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:37.118 spare_delay 00:14:37.118 04:05:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:37.118 04:05:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:14:37.118 04:05:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:37.118 04:05:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:37.118 [2024-12-06 04:05:30.351307] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:14:37.118 [2024-12-06 04:05:30.351390] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:37.118 [2024-12-06 04:05:30.351420] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:14:37.118 [2024-12-06 04:05:30.351437] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:37.118 [2024-12-06 04:05:30.353980] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:37.118 [2024-12-06 04:05:30.354034] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:14:37.118 spare 00:14:37.118 04:05:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:37.118 04:05:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:14:37.118 04:05:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:37.118 04:05:30 
bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:37.118 [2024-12-06 04:05:30.363393] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:37.118 [2024-12-06 04:05:30.365569] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:37.118 [2024-12-06 04:05:30.365714] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:14:37.118 [2024-12-06 04:05:30.365744] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:14:37.118 [2024-12-06 04:05:30.366127] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:14:37.118 [2024-12-06 04:05:30.366367] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:14:37.118 [2024-12-06 04:05:30.366392] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:14:37.118 [2024-12-06 04:05:30.366609] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:37.118 04:05:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:37.118 04:05:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:14:37.118 04:05:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:37.118 04:05:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:37.118 04:05:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:37.118 04:05:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:37.118 04:05:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:37.118 04:05:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 
00:14:37.118 04:05:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:37.118 04:05:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:37.118 04:05:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:37.118 04:05:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:37.118 04:05:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:37.118 04:05:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:37.118 04:05:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:37.118 04:05:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:37.118 04:05:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:37.118 "name": "raid_bdev1", 00:14:37.118 "uuid": "0d1efd96-7237-4b7d-85d2-3ca9c341bc04", 00:14:37.118 "strip_size_kb": 0, 00:14:37.118 "state": "online", 00:14:37.118 "raid_level": "raid1", 00:14:37.118 "superblock": false, 00:14:37.118 "num_base_bdevs": 2, 00:14:37.118 "num_base_bdevs_discovered": 2, 00:14:37.118 "num_base_bdevs_operational": 2, 00:14:37.118 "base_bdevs_list": [ 00:14:37.118 { 00:14:37.118 "name": "BaseBdev1", 00:14:37.118 "uuid": "7c0c4d81-753d-5da5-8f17-06cb019deb03", 00:14:37.118 "is_configured": true, 00:14:37.118 "data_offset": 0, 00:14:37.118 "data_size": 65536 00:14:37.118 }, 00:14:37.118 { 00:14:37.118 "name": "BaseBdev2", 00:14:37.118 "uuid": "38576e8a-e17b-542d-950b-4e0a467a7d29", 00:14:37.118 "is_configured": true, 00:14:37.118 "data_offset": 0, 00:14:37.118 "data_size": 65536 00:14:37.118 } 00:14:37.118 ] 00:14:37.118 }' 00:14:37.118 04:05:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:37.118 04:05:30 bdev_raid.raid_rebuild_test_io -- 
common/autotest_common.sh@10 -- # set +x 00:14:37.685 04:05:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:14:37.685 04:05:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:14:37.685 04:05:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:37.685 04:05:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:37.685 [2024-12-06 04:05:30.878836] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:37.685 04:05:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:37.685 04:05:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=65536 00:14:37.685 04:05:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:14:37.685 04:05:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:37.685 04:05:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:37.685 04:05:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:37.685 04:05:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:37.685 04:05:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@619 -- # data_offset=0 00:14:37.685 04:05:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@621 -- # '[' true = true ']' 00:14:37.685 04:05:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:14:37.685 04:05:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@623 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:14:37.685 04:05:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:37.685 04:05:30 bdev_raid.raid_rebuild_test_io -- 
common/autotest_common.sh@10 -- # set +x 00:14:37.685 [2024-12-06 04:05:30.958376] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:14:37.685 04:05:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:37.685 04:05:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:14:37.685 04:05:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:37.685 04:05:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:37.685 04:05:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:37.685 04:05:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:37.685 04:05:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:14:37.685 04:05:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:37.685 04:05:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:37.685 04:05:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:37.685 04:05:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:37.685 04:05:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:37.685 04:05:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:37.685 04:05:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:37.685 04:05:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:37.685 04:05:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:37.685 04:05:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # 
raid_bdev_info='{ 00:14:37.685 "name": "raid_bdev1", 00:14:37.685 "uuid": "0d1efd96-7237-4b7d-85d2-3ca9c341bc04", 00:14:37.685 "strip_size_kb": 0, 00:14:37.685 "state": "online", 00:14:37.685 "raid_level": "raid1", 00:14:37.685 "superblock": false, 00:14:37.685 "num_base_bdevs": 2, 00:14:37.685 "num_base_bdevs_discovered": 1, 00:14:37.685 "num_base_bdevs_operational": 1, 00:14:37.685 "base_bdevs_list": [ 00:14:37.685 { 00:14:37.685 "name": null, 00:14:37.685 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:37.685 "is_configured": false, 00:14:37.686 "data_offset": 0, 00:14:37.686 "data_size": 65536 00:14:37.686 }, 00:14:37.686 { 00:14:37.686 "name": "BaseBdev2", 00:14:37.686 "uuid": "38576e8a-e17b-542d-950b-4e0a467a7d29", 00:14:37.686 "is_configured": true, 00:14:37.686 "data_offset": 0, 00:14:37.686 "data_size": 65536 00:14:37.686 } 00:14:37.686 ] 00:14:37.686 }' 00:14:37.686 04:05:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:37.686 04:05:31 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:37.944 [2024-12-06 04:05:31.079338] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:14:37.944 I/O size of 3145728 is greater than zero copy threshold (65536). 00:14:37.944 Zero copy mechanism will not be used. 00:14:37.944 Running I/O for 60 seconds... 
00:14:38.203 04:05:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:14:38.203 04:05:31 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:38.203 04:05:31 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:38.203 [2024-12-06 04:05:31.484663] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:38.203 04:05:31 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:38.203 04:05:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@647 -- # sleep 1 00:14:38.203 [2024-12-06 04:05:31.531040] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:14:38.203 [2024-12-06 04:05:31.533291] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:14:38.464 [2024-12-06 04:05:31.647737] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:14:38.464 [2024-12-06 04:05:31.648403] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:14:38.723 [2024-12-06 04:05:31.879926] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:14:38.723 [2024-12-06 04:05:31.880355] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:14:38.982 128.00 IOPS, 384.00 MiB/s [2024-12-06T04:05:32.336Z] [2024-12-06 04:05:32.255655] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:14:39.254 04:05:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:39.255 04:05:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local 
raid_bdev_name=raid_bdev1 00:14:39.255 04:05:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:39.255 04:05:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:39.255 04:05:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:39.255 04:05:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:39.255 04:05:32 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:39.255 04:05:32 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:39.255 04:05:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:39.255 04:05:32 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:39.255 04:05:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:39.255 "name": "raid_bdev1", 00:14:39.255 "uuid": "0d1efd96-7237-4b7d-85d2-3ca9c341bc04", 00:14:39.255 "strip_size_kb": 0, 00:14:39.255 "state": "online", 00:14:39.255 "raid_level": "raid1", 00:14:39.255 "superblock": false, 00:14:39.255 "num_base_bdevs": 2, 00:14:39.255 "num_base_bdevs_discovered": 2, 00:14:39.255 "num_base_bdevs_operational": 2, 00:14:39.255 "process": { 00:14:39.255 "type": "rebuild", 00:14:39.255 "target": "spare", 00:14:39.255 "progress": { 00:14:39.255 "blocks": 10240, 00:14:39.255 "percent": 15 00:14:39.255 } 00:14:39.255 }, 00:14:39.255 "base_bdevs_list": [ 00:14:39.255 { 00:14:39.255 "name": "spare", 00:14:39.255 "uuid": "33bb198f-6442-5efe-8ee7-36ae4699730f", 00:14:39.255 "is_configured": true, 00:14:39.255 "data_offset": 0, 00:14:39.255 "data_size": 65536 00:14:39.255 }, 00:14:39.255 { 00:14:39.255 "name": "BaseBdev2", 00:14:39.255 "uuid": "38576e8a-e17b-542d-950b-4e0a467a7d29", 00:14:39.255 "is_configured": true, 00:14:39.255 "data_offset": 0, 00:14:39.255 
"data_size": 65536 00:14:39.255 } 00:14:39.255 ] 00:14:39.255 }' 00:14:39.255 04:05:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:39.520 04:05:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:39.520 04:05:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:39.520 04:05:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:39.520 04:05:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:14:39.520 04:05:32 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:39.520 04:05:32 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:39.520 [2024-12-06 04:05:32.677299] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:39.520 [2024-12-06 04:05:32.721597] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:14:39.520 [2024-12-06 04:05:32.821392] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:14:39.520 [2024-12-06 04:05:32.839085] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:39.520 [2024-12-06 04:05:32.839170] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:39.520 [2024-12-06 04:05:32.839192] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:14:39.779 [2024-12-06 04:05:32.883238] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000006080 00:14:39.779 04:05:32 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:39.779 04:05:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 
online raid1 0 1 00:14:39.779 04:05:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:39.779 04:05:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:39.779 04:05:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:39.779 04:05:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:39.779 04:05:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:14:39.779 04:05:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:39.779 04:05:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:39.779 04:05:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:39.779 04:05:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:39.779 04:05:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:39.779 04:05:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:39.779 04:05:32 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:39.779 04:05:32 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:39.779 04:05:32 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:39.779 04:05:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:39.779 "name": "raid_bdev1", 00:14:39.779 "uuid": "0d1efd96-7237-4b7d-85d2-3ca9c341bc04", 00:14:39.779 "strip_size_kb": 0, 00:14:39.779 "state": "online", 00:14:39.779 "raid_level": "raid1", 00:14:39.779 "superblock": false, 00:14:39.779 "num_base_bdevs": 2, 00:14:39.779 "num_base_bdevs_discovered": 1, 00:14:39.779 "num_base_bdevs_operational": 1, 
00:14:39.779 "base_bdevs_list": [ 00:14:39.779 { 00:14:39.779 "name": null, 00:14:39.779 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:39.779 "is_configured": false, 00:14:39.779 "data_offset": 0, 00:14:39.779 "data_size": 65536 00:14:39.779 }, 00:14:39.779 { 00:14:39.779 "name": "BaseBdev2", 00:14:39.779 "uuid": "38576e8a-e17b-542d-950b-4e0a467a7d29", 00:14:39.779 "is_configured": true, 00:14:39.779 "data_offset": 0, 00:14:39.779 "data_size": 65536 00:14:39.779 } 00:14:39.779 ] 00:14:39.779 }' 00:14:39.779 04:05:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:39.779 04:05:32 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:40.038 123.00 IOPS, 369.00 MiB/s [2024-12-06T04:05:33.392Z] 04:05:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:40.038 04:05:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:40.038 04:05:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:40.038 04:05:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:40.038 04:05:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:40.038 04:05:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:40.038 04:05:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:40.038 04:05:33 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:40.038 04:05:33 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:40.038 04:05:33 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:40.038 04:05:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:40.038 "name": "raid_bdev1", 
00:14:40.038 "uuid": "0d1efd96-7237-4b7d-85d2-3ca9c341bc04", 00:14:40.038 "strip_size_kb": 0, 00:14:40.038 "state": "online", 00:14:40.038 "raid_level": "raid1", 00:14:40.038 "superblock": false, 00:14:40.038 "num_base_bdevs": 2, 00:14:40.038 "num_base_bdevs_discovered": 1, 00:14:40.038 "num_base_bdevs_operational": 1, 00:14:40.038 "base_bdevs_list": [ 00:14:40.038 { 00:14:40.038 "name": null, 00:14:40.038 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:40.038 "is_configured": false, 00:14:40.038 "data_offset": 0, 00:14:40.038 "data_size": 65536 00:14:40.038 }, 00:14:40.038 { 00:14:40.038 "name": "BaseBdev2", 00:14:40.038 "uuid": "38576e8a-e17b-542d-950b-4e0a467a7d29", 00:14:40.038 "is_configured": true, 00:14:40.038 "data_offset": 0, 00:14:40.038 "data_size": 65536 00:14:40.038 } 00:14:40.038 ] 00:14:40.038 }' 00:14:40.038 04:05:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:40.297 04:05:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:40.297 04:05:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:40.297 04:05:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:40.297 04:05:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:14:40.297 04:05:33 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:40.297 04:05:33 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:40.297 [2024-12-06 04:05:33.449709] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:40.297 04:05:33 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:40.297 04:05:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@663 -- # sleep 1 00:14:40.297 [2024-12-06 04:05:33.512172] bdev_raid.c: 
265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:14:40.297 [2024-12-06 04:05:33.514368] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:14:40.297 [2024-12-06 04:05:33.641637] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:14:40.297 [2024-12-06 04:05:33.642303] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:14:40.556 [2024-12-06 04:05:33.853246] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:14:40.556 [2024-12-06 04:05:33.853645] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:14:41.073 143.67 IOPS, 431.00 MiB/s [2024-12-06T04:05:34.427Z] [2024-12-06 04:05:34.176196] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:14:41.073 [2024-12-06 04:05:34.176879] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:14:41.073 [2024-12-06 04:05:34.386799] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:14:41.073 [2024-12-06 04:05:34.387283] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:14:41.331 04:05:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:41.331 04:05:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:41.331 04:05:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:41.331 04:05:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local 
target=spare 00:14:41.331 04:05:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:41.331 04:05:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:41.331 04:05:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:41.331 04:05:34 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:41.331 04:05:34 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:41.331 04:05:34 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:41.331 04:05:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:41.331 "name": "raid_bdev1", 00:14:41.331 "uuid": "0d1efd96-7237-4b7d-85d2-3ca9c341bc04", 00:14:41.331 "strip_size_kb": 0, 00:14:41.331 "state": "online", 00:14:41.331 "raid_level": "raid1", 00:14:41.331 "superblock": false, 00:14:41.331 "num_base_bdevs": 2, 00:14:41.331 "num_base_bdevs_discovered": 2, 00:14:41.331 "num_base_bdevs_operational": 2, 00:14:41.331 "process": { 00:14:41.331 "type": "rebuild", 00:14:41.331 "target": "spare", 00:14:41.331 "progress": { 00:14:41.331 "blocks": 12288, 00:14:41.331 "percent": 18 00:14:41.331 } 00:14:41.331 }, 00:14:41.331 "base_bdevs_list": [ 00:14:41.331 { 00:14:41.331 "name": "spare", 00:14:41.331 "uuid": "33bb198f-6442-5efe-8ee7-36ae4699730f", 00:14:41.331 "is_configured": true, 00:14:41.331 "data_offset": 0, 00:14:41.331 "data_size": 65536 00:14:41.331 }, 00:14:41.331 { 00:14:41.331 "name": "BaseBdev2", 00:14:41.331 "uuid": "38576e8a-e17b-542d-950b-4e0a467a7d29", 00:14:41.331 "is_configured": true, 00:14:41.331 "data_offset": 0, 00:14:41.331 "data_size": 65536 00:14:41.331 } 00:14:41.331 ] 00:14:41.331 }' 00:14:41.331 04:05:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:41.331 04:05:34 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:41.331 04:05:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:41.331 04:05:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:41.331 04:05:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@666 -- # '[' false = true ']' 00:14:41.331 04:05:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:14:41.331 04:05:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:14:41.331 04:05:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:14:41.331 04:05:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@706 -- # local timeout=412 00:14:41.331 04:05:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:41.331 04:05:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:41.331 04:05:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:41.331 04:05:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:41.331 04:05:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:41.331 04:05:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:41.331 04:05:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:41.331 04:05:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:41.331 04:05:34 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:41.331 04:05:34 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:41.331 [2024-12-06 
04:05:34.635358] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:14:41.331 04:05:34 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:41.331 04:05:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:41.331 "name": "raid_bdev1", 00:14:41.331 "uuid": "0d1efd96-7237-4b7d-85d2-3ca9c341bc04", 00:14:41.331 "strip_size_kb": 0, 00:14:41.331 "state": "online", 00:14:41.331 "raid_level": "raid1", 00:14:41.331 "superblock": false, 00:14:41.331 "num_base_bdevs": 2, 00:14:41.331 "num_base_bdevs_discovered": 2, 00:14:41.331 "num_base_bdevs_operational": 2, 00:14:41.331 "process": { 00:14:41.331 "type": "rebuild", 00:14:41.331 "target": "spare", 00:14:41.331 "progress": { 00:14:41.331 "blocks": 14336, 00:14:41.331 "percent": 21 00:14:41.331 } 00:14:41.331 }, 00:14:41.332 "base_bdevs_list": [ 00:14:41.332 { 00:14:41.332 "name": "spare", 00:14:41.332 "uuid": "33bb198f-6442-5efe-8ee7-36ae4699730f", 00:14:41.332 "is_configured": true, 00:14:41.332 "data_offset": 0, 00:14:41.332 "data_size": 65536 00:14:41.332 }, 00:14:41.332 { 00:14:41.332 "name": "BaseBdev2", 00:14:41.332 "uuid": "38576e8a-e17b-542d-950b-4e0a467a7d29", 00:14:41.332 "is_configured": true, 00:14:41.332 "data_offset": 0, 00:14:41.332 "data_size": 65536 00:14:41.332 } 00:14:41.332 ] 00:14:41.332 }' 00:14:41.332 04:05:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:41.590 04:05:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:41.590 04:05:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:41.590 04:05:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:41.590 04:05:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:14:41.590 [2024-12-06 
04:05:34.859188] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:14:41.590 [2024-12-06 04:05:34.859673] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:14:41.848 121.50 IOPS, 364.50 MiB/s [2024-12-06T04:05:35.202Z] [2024-12-06 04:05:35.105555] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 offset_begin: 18432 offset_end: 24576 00:14:42.105 [2024-12-06 04:05:35.339371] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576 00:14:42.671 04:05:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:42.671 04:05:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:42.671 04:05:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:42.671 04:05:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:42.671 04:05:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:42.671 04:05:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:42.671 04:05:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:42.671 04:05:35 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:42.671 04:05:35 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:42.671 04:05:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:42.671 04:05:35 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:42.671 04:05:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # 
raid_bdev_info='{ 00:14:42.671 "name": "raid_bdev1", 00:14:42.671 "uuid": "0d1efd96-7237-4b7d-85d2-3ca9c341bc04", 00:14:42.671 "strip_size_kb": 0, 00:14:42.671 "state": "online", 00:14:42.671 "raid_level": "raid1", 00:14:42.671 "superblock": false, 00:14:42.671 "num_base_bdevs": 2, 00:14:42.671 "num_base_bdevs_discovered": 2, 00:14:42.671 "num_base_bdevs_operational": 2, 00:14:42.671 "process": { 00:14:42.671 "type": "rebuild", 00:14:42.671 "target": "spare", 00:14:42.671 "progress": { 00:14:42.671 "blocks": 26624, 00:14:42.671 "percent": 40 00:14:42.671 } 00:14:42.671 }, 00:14:42.671 "base_bdevs_list": [ 00:14:42.671 { 00:14:42.671 "name": "spare", 00:14:42.671 "uuid": "33bb198f-6442-5efe-8ee7-36ae4699730f", 00:14:42.671 "is_configured": true, 00:14:42.671 "data_offset": 0, 00:14:42.671 "data_size": 65536 00:14:42.671 }, 00:14:42.671 { 00:14:42.671 "name": "BaseBdev2", 00:14:42.671 "uuid": "38576e8a-e17b-542d-950b-4e0a467a7d29", 00:14:42.671 "is_configured": true, 00:14:42.671 "data_offset": 0, 00:14:42.671 "data_size": 65536 00:14:42.671 } 00:14:42.671 ] 00:14:42.671 }' 00:14:42.671 04:05:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:42.671 04:05:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:42.671 04:05:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:42.671 04:05:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:42.671 04:05:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:14:43.228 108.00 IOPS, 324.00 MiB/s [2024-12-06T04:05:36.582Z] [2024-12-06 04:05:36.351172] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 38912 offset_begin: 36864 offset_end: 43008 00:14:43.228 [2024-12-06 04:05:36.570801] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 40960 offset_begin: 36864 
offset_end: 43008 00:14:43.795 04:05:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:43.795 04:05:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:43.795 04:05:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:43.795 04:05:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:43.795 04:05:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:43.795 04:05:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:43.795 04:05:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:43.795 04:05:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:43.795 04:05:36 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:43.795 04:05:36 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:43.795 04:05:36 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:43.795 04:05:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:43.795 "name": "raid_bdev1", 00:14:43.795 "uuid": "0d1efd96-7237-4b7d-85d2-3ca9c341bc04", 00:14:43.795 "strip_size_kb": 0, 00:14:43.795 "state": "online", 00:14:43.795 "raid_level": "raid1", 00:14:43.795 "superblock": false, 00:14:43.795 "num_base_bdevs": 2, 00:14:43.795 "num_base_bdevs_discovered": 2, 00:14:43.795 "num_base_bdevs_operational": 2, 00:14:43.795 "process": { 00:14:43.795 "type": "rebuild", 00:14:43.795 "target": "spare", 00:14:43.795 "progress": { 00:14:43.795 "blocks": 43008, 00:14:43.795 "percent": 65 00:14:43.795 } 00:14:43.795 }, 00:14:43.795 "base_bdevs_list": [ 00:14:43.795 { 00:14:43.795 "name": "spare", 00:14:43.795 "uuid": 
"33bb198f-6442-5efe-8ee7-36ae4699730f", 00:14:43.795 "is_configured": true, 00:14:43.795 "data_offset": 0, 00:14:43.795 "data_size": 65536 00:14:43.795 }, 00:14:43.795 { 00:14:43.795 "name": "BaseBdev2", 00:14:43.795 "uuid": "38576e8a-e17b-542d-950b-4e0a467a7d29", 00:14:43.795 "is_configured": true, 00:14:43.795 "data_offset": 0, 00:14:43.795 "data_size": 65536 00:14:43.795 } 00:14:43.795 ] 00:14:43.795 }' 00:14:43.795 04:05:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:43.795 04:05:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:43.795 04:05:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:43.795 04:05:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:43.795 04:05:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:14:43.795 [2024-12-06 04:05:37.050051] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 47104 offset_begin: 43008 offset_end: 49152 00:14:44.730 95.67 IOPS, 287.00 MiB/s [2024-12-06T04:05:38.084Z] [2024-12-06 04:05:38.030761] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:14:44.730 04:05:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:44.730 04:05:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:44.730 04:05:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:44.730 04:05:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:44.730 04:05:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:44.730 04:05:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:44.730 04:05:38 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:44.730 04:05:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:44.730 04:05:38 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:44.730 04:05:38 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:44.730 04:05:38 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:44.989 04:05:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:44.989 "name": "raid_bdev1", 00:14:44.989 "uuid": "0d1efd96-7237-4b7d-85d2-3ca9c341bc04", 00:14:44.989 "strip_size_kb": 0, 00:14:44.989 "state": "online", 00:14:44.989 "raid_level": "raid1", 00:14:44.989 "superblock": false, 00:14:44.989 "num_base_bdevs": 2, 00:14:44.989 "num_base_bdevs_discovered": 2, 00:14:44.989 "num_base_bdevs_operational": 2, 00:14:44.989 "process": { 00:14:44.989 "type": "rebuild", 00:14:44.989 "target": "spare", 00:14:44.989 "progress": { 00:14:44.989 "blocks": 65536, 00:14:44.989 "percent": 100 00:14:44.989 } 00:14:44.989 }, 00:14:44.989 "base_bdevs_list": [ 00:14:44.989 { 00:14:44.989 "name": "spare", 00:14:44.989 "uuid": "33bb198f-6442-5efe-8ee7-36ae4699730f", 00:14:44.989 "is_configured": true, 00:14:44.989 "data_offset": 0, 00:14:44.989 "data_size": 65536 00:14:44.989 }, 00:14:44.989 { 00:14:44.989 "name": "BaseBdev2", 00:14:44.989 "uuid": "38576e8a-e17b-542d-950b-4e0a467a7d29", 00:14:44.989 "is_configured": true, 00:14:44.989 "data_offset": 0, 00:14:44.989 "data_size": 65536 00:14:44.989 } 00:14:44.989 ] 00:14:44.989 }' 00:14:44.989 85.43 IOPS, 256.29 MiB/s [2024-12-06T04:05:38.343Z] 04:05:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:44.989 [2024-12-06 04:05:38.130552] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev 
raid_bdev1 00:14:44.989 [2024-12-06 04:05:38.133005] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:44.989 04:05:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:44.989 04:05:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:44.989 04:05:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:44.989 04:05:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:14:45.925 78.88 IOPS, 236.62 MiB/s [2024-12-06T04:05:39.279Z] 04:05:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:45.925 04:05:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:45.925 04:05:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:45.925 04:05:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:45.925 04:05:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:45.925 04:05:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:45.925 04:05:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:45.925 04:05:39 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:45.925 04:05:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:45.925 04:05:39 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:45.925 04:05:39 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:45.925 04:05:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:45.925 "name": "raid_bdev1", 00:14:45.925 "uuid": 
"0d1efd96-7237-4b7d-85d2-3ca9c341bc04", 00:14:45.925 "strip_size_kb": 0, 00:14:45.925 "state": "online", 00:14:45.925 "raid_level": "raid1", 00:14:45.925 "superblock": false, 00:14:45.925 "num_base_bdevs": 2, 00:14:45.925 "num_base_bdevs_discovered": 2, 00:14:45.925 "num_base_bdevs_operational": 2, 00:14:45.925 "base_bdevs_list": [ 00:14:45.925 { 00:14:45.925 "name": "spare", 00:14:45.925 "uuid": "33bb198f-6442-5efe-8ee7-36ae4699730f", 00:14:45.925 "is_configured": true, 00:14:45.925 "data_offset": 0, 00:14:45.925 "data_size": 65536 00:14:45.925 }, 00:14:45.925 { 00:14:45.925 "name": "BaseBdev2", 00:14:45.925 "uuid": "38576e8a-e17b-542d-950b-4e0a467a7d29", 00:14:45.925 "is_configured": true, 00:14:45.925 "data_offset": 0, 00:14:45.925 "data_size": 65536 00:14:45.925 } 00:14:45.925 ] 00:14:45.925 }' 00:14:45.925 04:05:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:45.925 04:05:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:14:46.184 04:05:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:46.184 04:05:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:14:46.184 04:05:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@709 -- # break 00:14:46.184 04:05:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:46.184 04:05:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:46.184 04:05:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:46.184 04:05:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:46.184 04:05:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:46.184 04:05:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:14:46.184 04:05:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:46.184 04:05:39 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:46.184 04:05:39 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:46.184 04:05:39 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:46.184 04:05:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:46.184 "name": "raid_bdev1", 00:14:46.184 "uuid": "0d1efd96-7237-4b7d-85d2-3ca9c341bc04", 00:14:46.184 "strip_size_kb": 0, 00:14:46.184 "state": "online", 00:14:46.184 "raid_level": "raid1", 00:14:46.184 "superblock": false, 00:14:46.184 "num_base_bdevs": 2, 00:14:46.184 "num_base_bdevs_discovered": 2, 00:14:46.184 "num_base_bdevs_operational": 2, 00:14:46.184 "base_bdevs_list": [ 00:14:46.184 { 00:14:46.184 "name": "spare", 00:14:46.184 "uuid": "33bb198f-6442-5efe-8ee7-36ae4699730f", 00:14:46.184 "is_configured": true, 00:14:46.184 "data_offset": 0, 00:14:46.184 "data_size": 65536 00:14:46.184 }, 00:14:46.184 { 00:14:46.184 "name": "BaseBdev2", 00:14:46.184 "uuid": "38576e8a-e17b-542d-950b-4e0a467a7d29", 00:14:46.184 "is_configured": true, 00:14:46.184 "data_offset": 0, 00:14:46.184 "data_size": 65536 00:14:46.184 } 00:14:46.184 ] 00:14:46.184 }' 00:14:46.184 04:05:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:46.184 04:05:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:46.184 04:05:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:46.184 04:05:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:46.184 04:05:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 
0 2 00:14:46.184 04:05:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:46.184 04:05:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:46.184 04:05:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:46.184 04:05:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:46.184 04:05:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:46.184 04:05:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:46.184 04:05:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:46.184 04:05:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:46.184 04:05:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:46.184 04:05:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:46.184 04:05:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:46.184 04:05:39 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:46.184 04:05:39 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:46.184 04:05:39 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:46.184 04:05:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:46.184 "name": "raid_bdev1", 00:14:46.184 "uuid": "0d1efd96-7237-4b7d-85d2-3ca9c341bc04", 00:14:46.184 "strip_size_kb": 0, 00:14:46.184 "state": "online", 00:14:46.184 "raid_level": "raid1", 00:14:46.184 "superblock": false, 00:14:46.184 "num_base_bdevs": 2, 00:14:46.184 "num_base_bdevs_discovered": 2, 00:14:46.184 "num_base_bdevs_operational": 2, 00:14:46.184 
"base_bdevs_list": [ 00:14:46.184 { 00:14:46.184 "name": "spare", 00:14:46.184 "uuid": "33bb198f-6442-5efe-8ee7-36ae4699730f", 00:14:46.184 "is_configured": true, 00:14:46.185 "data_offset": 0, 00:14:46.185 "data_size": 65536 00:14:46.185 }, 00:14:46.185 { 00:14:46.185 "name": "BaseBdev2", 00:14:46.185 "uuid": "38576e8a-e17b-542d-950b-4e0a467a7d29", 00:14:46.185 "is_configured": true, 00:14:46.185 "data_offset": 0, 00:14:46.185 "data_size": 65536 00:14:46.185 } 00:14:46.185 ] 00:14:46.185 }' 00:14:46.185 04:05:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:46.185 04:05:39 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:46.752 04:05:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:14:46.752 04:05:39 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:46.752 04:05:39 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:46.752 [2024-12-06 04:05:39.931537] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:46.752 [2024-12-06 04:05:39.931589] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:46.752 00:14:46.752 Latency(us) 00:14:46.752 [2024-12-06T04:05:40.106Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:46.752 Job: raid_bdev1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 2, IO size: 3145728) 00:14:46.752 raid_bdev1 : 8.96 74.23 222.69 0.00 0.00 19018.03 352.36 137368.03 00:14:46.752 [2024-12-06T04:05:40.106Z] =================================================================================================================== 00:14:46.752 [2024-12-06T04:05:40.106Z] Total : 74.23 222.69 0.00 0.00 19018.03 352.36 137368.03 00:14:46.752 [2024-12-06 04:05:40.048602] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:46.752 { 00:14:46.752 
"results": [ 00:14:46.752 { 00:14:46.752 "job": "raid_bdev1", 00:14:46.752 "core_mask": "0x1", 00:14:46.752 "workload": "randrw", 00:14:46.752 "percentage": 50, 00:14:46.752 "status": "finished", 00:14:46.752 "queue_depth": 2, 00:14:46.752 "io_size": 3145728, 00:14:46.752 "runtime": 8.958756, 00:14:46.752 "iops": 74.22905590910166, 00:14:46.752 "mibps": 222.687167727305, 00:14:46.752 "io_failed": 0, 00:14:46.752 "io_timeout": 0, 00:14:46.752 "avg_latency_us": 19018.026421512295, 00:14:46.752 "min_latency_us": 352.3633187772926, 00:14:46.752 "max_latency_us": 137368.03493449782 00:14:46.752 } 00:14:46.752 ], 00:14:46.752 "core_count": 1 00:14:46.752 } 00:14:46.752 [2024-12-06 04:05:40.048802] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:46.752 [2024-12-06 04:05:40.048913] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:46.752 [2024-12-06 04:05:40.048927] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:14:46.752 04:05:40 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:46.752 04:05:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:46.752 04:05:40 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:46.752 04:05:40 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:46.752 04:05:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # jq length 00:14:46.752 04:05:40 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:47.033 04:05:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:14:47.033 04:05:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:14:47.033 04:05:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@723 -- # '[' true = true 
']' 00:14:47.033 04:05:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@725 -- # nbd_start_disks /var/tmp/spdk.sock spare /dev/nbd0 00:14:47.033 04:05:40 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:14:47.033 04:05:40 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=('spare') 00:14:47.033 04:05:40 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:14:47.033 04:05:40 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:14:47.033 04:05:40 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:14:47.033 04:05:40 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i 00:14:47.033 04:05:40 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:14:47.033 04:05:40 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:47.033 04:05:40 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd0 00:14:47.033 /dev/nbd0 00:14:47.033 04:05:40 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:14:47.311 04:05:40 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:14:47.311 04:05:40 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:14:47.311 04:05:40 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@873 -- # local i 00:14:47.311 04:05:40 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:14:47.311 04:05:40 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:14:47.311 04:05:40 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:14:47.311 04:05:40 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@877 -- # break 00:14:47.311 
04:05:40 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:14:47.311 04:05:40 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:14:47.311 04:05:40 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:47.311 1+0 records in 00:14:47.311 1+0 records out 00:14:47.311 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000369176 s, 11.1 MB/s 00:14:47.311 04:05:40 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:47.311 04:05:40 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # size=4096 00:14:47.311 04:05:40 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:47.311 04:05:40 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:14:47.311 04:05:40 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@893 -- # return 0 00:14:47.311 04:05:40 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:47.311 04:05:40 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:47.311 04:05:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:14:47.311 04:05:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@727 -- # '[' -z BaseBdev2 ']' 00:14:47.311 04:05:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@730 -- # nbd_start_disks /var/tmp/spdk.sock BaseBdev2 /dev/nbd1 00:14:47.311 04:05:40 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:14:47.311 04:05:40 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev2') 00:14:47.311 04:05:40 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list 
00:14:47.311 04:05:40 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:14:47.311 04:05:40 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:14:47.311 04:05:40 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i 00:14:47.311 04:05:40 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:14:47.311 04:05:40 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:47.311 04:05:40 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev2 /dev/nbd1 00:14:47.311 /dev/nbd1 00:14:47.571 04:05:40 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:14:47.571 04:05:40 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:14:47.571 04:05:40 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:14:47.571 04:05:40 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@873 -- # local i 00:14:47.571 04:05:40 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:14:47.571 04:05:40 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:14:47.571 04:05:40 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:14:47.571 04:05:40 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@877 -- # break 00:14:47.571 04:05:40 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:14:47.571 04:05:40 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:14:47.571 04:05:40 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:47.571 1+0 records in 00:14:47.571 1+0 records out 00:14:47.571 
4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000801536 s, 5.1 MB/s 00:14:47.571 04:05:40 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:47.571 04:05:40 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # size=4096 00:14:47.571 04:05:40 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:47.571 04:05:40 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:14:47.571 04:05:40 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@893 -- # return 0 00:14:47.571 04:05:40 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:47.571 04:05:40 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:47.571 04:05:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@731 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:14:47.571 04:05:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@732 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1 00:14:47.571 04:05:40 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:14:47.571 04:05:40 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:14:47.571 04:05:40 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:14:47.571 04:05:40 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:14:47.571 04:05:40 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:47.571 04:05:40 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:14:48.140 04:05:41 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:14:48.140 04:05:41 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 
00:14:48.140 04:05:41 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:14:48.140 04:05:41 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:48.140 04:05:41 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:48.140 04:05:41 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:14:48.140 04:05:41 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 00:14:48.140 04:05:41 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0 00:14:48.140 04:05:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@734 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:14:48.140 04:05:41 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:14:48.140 04:05:41 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:14:48.140 04:05:41 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:14:48.140 04:05:41 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:14:48.140 04:05:41 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:48.140 04:05:41 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:14:48.140 04:05:41 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:14:48.140 04:05:41 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:14:48.140 04:05:41 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:14:48.140 04:05:41 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:48.140 04:05:41 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:48.140 04:05:41 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # 
grep -q -w nbd0 /proc/partitions 00:14:48.140 04:05:41 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 00:14:48.140 04:05:41 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0 00:14:48.140 04:05:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@743 -- # '[' false = true ']' 00:14:48.140 04:05:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@784 -- # killprocess 76616 00:14:48.140 04:05:41 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@954 -- # '[' -z 76616 ']' 00:14:48.140 04:05:41 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@958 -- # kill -0 76616 00:14:48.140 04:05:41 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@959 -- # uname 00:14:48.140 04:05:41 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:48.399 04:05:41 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 76616 00:14:48.399 04:05:41 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:48.399 04:05:41 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:48.399 killing process with pid 76616 00:14:48.399 Received shutdown signal, test time was about 10.464933 seconds 00:14:48.399 00:14:48.399 Latency(us) 00:14:48.399 [2024-12-06T04:05:41.753Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:48.399 [2024-12-06T04:05:41.753Z] =================================================================================================================== 00:14:48.399 [2024-12-06T04:05:41.753Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:14:48.399 04:05:41 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@972 -- # echo 'killing process with pid 76616' 00:14:48.399 04:05:41 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@973 -- # kill 76616 00:14:48.399 [2024-12-06 04:05:41.526606] 
bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:48.399 04:05:41 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@978 -- # wait 76616 00:14:48.657 [2024-12-06 04:05:41.786425] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:50.035 04:05:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@786 -- # return 0 00:14:50.035 00:14:50.035 real 0m13.963s 00:14:50.035 user 0m17.428s 00:14:50.035 sys 0m1.608s 00:14:50.035 04:05:43 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:50.035 ************************************ 00:14:50.035 END TEST raid_rebuild_test_io 00:14:50.035 ************************************ 00:14:50.035 04:05:43 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:50.035 04:05:43 bdev_raid -- bdev/bdev_raid.sh@981 -- # run_test raid_rebuild_test_sb_io raid_rebuild_test raid1 2 true true true 00:14:50.035 04:05:43 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:14:50.035 04:05:43 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:50.035 04:05:43 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:14:50.035 ************************************ 00:14:50.035 START TEST raid_rebuild_test_sb_io 00:14:50.035 ************************************ 00:14:50.035 04:05:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 2 true true true 00:14:50.035 04:05:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:14:50.035 04:05:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2 00:14:50.035 04:05:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:14:50.036 04:05:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@572 -- # local background_io=true 00:14:50.036 04:05:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@573 -- # 
local verify=true 00:14:50.036 04:05:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:14:50.036 04:05:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:50.036 04:05:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:14:50.036 04:05:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:14:50.036 04:05:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:50.036 04:05:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:14:50.036 04:05:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:14:50.036 04:05:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:50.036 04:05:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:14:50.036 04:05:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:14:50.036 04:05:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:14:50.036 04:05:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # local strip_size 00:14:50.036 04:05:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@577 -- # local create_arg 00:14:50.036 04:05:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:14:50.036 04:05:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@579 -- # local data_offset 00:14:50.036 04:05:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:14:50.036 04:05:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:14:50.036 04:05:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:14:50.036 04:05:43 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:14:50.036 04:05:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:14:50.036 04:05:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@597 -- # raid_pid=77038 00:14:50.036 04:05:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@598 -- # waitforlisten 77038 00:14:50.036 04:05:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@835 -- # '[' -z 77038 ']' 00:14:50.036 04:05:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:50.036 04:05:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:50.036 04:05:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:50.036 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:50.036 04:05:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:50.036 04:05:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:50.036 [2024-12-06 04:05:43.319682] Starting SPDK v25.01-pre git sha1 a4d2a837b / DPDK 24.03.0 initialization... 00:14:50.036 [2024-12-06 04:05:43.320030] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77038 ] 00:14:50.036 I/O size of 3145728 is greater than zero copy threshold (65536). 00:14:50.036 Zero copy mechanism will not be used. 
00:14:50.295 [2024-12-06 04:05:43.518728] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:50.555 [2024-12-06 04:05:43.656057] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:50.555 [2024-12-06 04:05:43.881663] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:50.555 [2024-12-06 04:05:43.881732] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:51.138 04:05:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:51.138 04:05:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@868 -- # return 0 00:14:51.138 04:05:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:14:51.138 04:05:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:14:51.138 04:05:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:51.138 04:05:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:51.138 BaseBdev1_malloc 00:14:51.138 04:05:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:51.138 04:05:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:14:51.138 04:05:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:51.138 04:05:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:51.138 [2024-12-06 04:05:44.261812] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:14:51.138 [2024-12-06 04:05:44.261884] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:51.138 [2024-12-06 04:05:44.261908] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 
00:14:51.138 [2024-12-06 04:05:44.261920] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:51.138 [2024-12-06 04:05:44.264095] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:51.138 [2024-12-06 04:05:44.264137] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:14:51.138 BaseBdev1 00:14:51.138 04:05:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:51.138 04:05:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:14:51.138 04:05:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:14:51.138 04:05:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:51.138 04:05:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:51.138 BaseBdev2_malloc 00:14:51.138 04:05:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:51.138 04:05:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:14:51.138 04:05:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:51.138 04:05:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:51.138 [2024-12-06 04:05:44.322074] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:14:51.138 [2024-12-06 04:05:44.322241] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:51.138 [2024-12-06 04:05:44.322304] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:14:51.138 [2024-12-06 04:05:44.322351] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:51.138 [2024-12-06 04:05:44.324897] 
vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:51.138 [2024-12-06 04:05:44.325011] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:14:51.138 BaseBdev2 00:14:51.138 04:05:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:51.138 04:05:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:14:51.138 04:05:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:51.138 04:05:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:51.138 spare_malloc 00:14:51.138 04:05:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:51.138 04:05:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:14:51.138 04:05:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:51.138 04:05:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:51.138 spare_delay 00:14:51.138 04:05:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:51.138 04:05:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:14:51.138 04:05:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:51.138 04:05:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:51.138 [2024-12-06 04:05:44.406831] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:14:51.138 [2024-12-06 04:05:44.406979] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:51.138 [2024-12-06 04:05:44.407047] vbdev_passthru.c: 
682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:14:51.138 [2024-12-06 04:05:44.407152] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:51.138 [2024-12-06 04:05:44.409735] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:51.138 [2024-12-06 04:05:44.409835] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:14:51.138 spare 00:14:51.138 04:05:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:51.138 04:05:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:14:51.138 04:05:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:51.138 04:05:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:51.138 [2024-12-06 04:05:44.418874] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:51.138 [2024-12-06 04:05:44.420988] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:51.138 [2024-12-06 04:05:44.421283] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:14:51.138 [2024-12-06 04:05:44.421341] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:14:51.138 [2024-12-06 04:05:44.421674] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:14:51.138 [2024-12-06 04:05:44.421906] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:14:51.138 [2024-12-06 04:05:44.421953] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:14:51.138 [2024-12-06 04:05:44.422176] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:51.138 04:05:44 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:51.138 04:05:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:14:51.138 04:05:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:51.138 04:05:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:51.138 04:05:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:51.138 04:05:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:51.138 04:05:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:51.138 04:05:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:51.138 04:05:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:51.138 04:05:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:51.138 04:05:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:51.138 04:05:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:51.138 04:05:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:51.138 04:05:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:51.138 04:05:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:51.138 04:05:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:51.138 04:05:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:51.138 "name": "raid_bdev1", 00:14:51.138 "uuid": "83d98405-c950-40c0-b352-2f4644181e12", 00:14:51.138 
"strip_size_kb": 0, 00:14:51.138 "state": "online", 00:14:51.138 "raid_level": "raid1", 00:14:51.138 "superblock": true, 00:14:51.138 "num_base_bdevs": 2, 00:14:51.138 "num_base_bdevs_discovered": 2, 00:14:51.138 "num_base_bdevs_operational": 2, 00:14:51.138 "base_bdevs_list": [ 00:14:51.138 { 00:14:51.138 "name": "BaseBdev1", 00:14:51.138 "uuid": "73350393-1478-5cbc-8c7f-54edc8d05dd2", 00:14:51.138 "is_configured": true, 00:14:51.138 "data_offset": 2048, 00:14:51.138 "data_size": 63488 00:14:51.138 }, 00:14:51.138 { 00:14:51.138 "name": "BaseBdev2", 00:14:51.138 "uuid": "0ca42ad7-e8e3-58b1-bfa2-20a45af37030", 00:14:51.138 "is_configured": true, 00:14:51.138 "data_offset": 2048, 00:14:51.139 "data_size": 63488 00:14:51.139 } 00:14:51.139 ] 00:14:51.139 }' 00:14:51.139 04:05:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:51.139 04:05:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:51.708 04:05:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:14:51.708 04:05:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:14:51.708 04:05:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:51.708 04:05:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:51.708 [2024-12-06 04:05:44.858425] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:51.708 04:05:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:51.708 04:05:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=63488 00:14:51.708 04:05:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:51.708 04:05:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:51.708 04:05:44 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:51.708 04:05:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:14:51.708 04:05:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:51.708 04:05:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # data_offset=2048 00:14:51.708 04:05:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@621 -- # '[' true = true ']' 00:14:51.708 04:05:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:14:51.708 04:05:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@623 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:14:51.708 04:05:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:51.708 04:05:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:51.708 [2024-12-06 04:05:44.941951] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:14:51.708 04:05:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:51.708 04:05:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:14:51.708 04:05:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:51.708 04:05:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:51.708 04:05:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:51.708 04:05:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:51.708 04:05:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:14:51.708 04:05:44 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:51.708 04:05:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:51.708 04:05:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:51.708 04:05:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:51.708 04:05:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:51.708 04:05:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:51.708 04:05:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:51.708 04:05:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:51.708 04:05:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:51.708 04:05:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:51.708 "name": "raid_bdev1", 00:14:51.708 "uuid": "83d98405-c950-40c0-b352-2f4644181e12", 00:14:51.708 "strip_size_kb": 0, 00:14:51.708 "state": "online", 00:14:51.708 "raid_level": "raid1", 00:14:51.708 "superblock": true, 00:14:51.708 "num_base_bdevs": 2, 00:14:51.708 "num_base_bdevs_discovered": 1, 00:14:51.708 "num_base_bdevs_operational": 1, 00:14:51.708 "base_bdevs_list": [ 00:14:51.708 { 00:14:51.708 "name": null, 00:14:51.708 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:51.708 "is_configured": false, 00:14:51.708 "data_offset": 0, 00:14:51.708 "data_size": 63488 00:14:51.708 }, 00:14:51.708 { 00:14:51.708 "name": "BaseBdev2", 00:14:51.708 "uuid": "0ca42ad7-e8e3-58b1-bfa2-20a45af37030", 00:14:51.708 "is_configured": true, 00:14:51.708 "data_offset": 2048, 00:14:51.708 "data_size": 63488 00:14:51.708 } 00:14:51.708 ] 00:14:51.708 }' 00:14:51.708 04:05:45 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:51.708 04:05:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:51.708 [2024-12-06 04:05:45.051020] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:14:51.708 I/O size of 3145728 is greater than zero copy threshold (65536). 00:14:51.708 Zero copy mechanism will not be used. 00:14:51.708 Running I/O for 60 seconds... 00:14:52.274 04:05:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:14:52.274 04:05:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:52.274 04:05:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:52.274 [2024-12-06 04:05:45.437166] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:52.274 04:05:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:52.274 04:05:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@647 -- # sleep 1 00:14:52.274 [2024-12-06 04:05:45.507908] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:14:52.274 [2024-12-06 04:05:45.510258] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:14:52.274 [2024-12-06 04:05:45.619542] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:14:52.274 [2024-12-06 04:05:45.620316] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:14:52.533 [2024-12-06 04:05:45.846065] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:14:52.533 [2024-12-06 04:05:45.846518] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 
6144 00:14:53.049 182.00 IOPS, 546.00 MiB/s [2024-12-06T04:05:46.403Z] [2024-12-06 04:05:46.293332] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:14:53.307 04:05:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:53.308 04:05:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:53.308 04:05:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:53.308 04:05:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:53.308 04:05:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:53.308 04:05:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:53.308 04:05:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:53.308 04:05:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:53.308 04:05:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:53.308 04:05:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:53.308 04:05:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:53.308 "name": "raid_bdev1", 00:14:53.308 "uuid": "83d98405-c950-40c0-b352-2f4644181e12", 00:14:53.308 "strip_size_kb": 0, 00:14:53.308 "state": "online", 00:14:53.308 "raid_level": "raid1", 00:14:53.308 "superblock": true, 00:14:53.308 "num_base_bdevs": 2, 00:14:53.308 "num_base_bdevs_discovered": 2, 00:14:53.308 "num_base_bdevs_operational": 2, 00:14:53.308 "process": { 00:14:53.308 "type": "rebuild", 00:14:53.308 "target": "spare", 00:14:53.308 "progress": { 00:14:53.308 "blocks": 12288, 00:14:53.308 "percent": 19 
00:14:53.308 } 00:14:53.308 }, 00:14:53.308 "base_bdevs_list": [ 00:14:53.308 { 00:14:53.308 "name": "spare", 00:14:53.308 "uuid": "8ff21741-4e15-54c6-ad62-f8b164a1571f", 00:14:53.308 "is_configured": true, 00:14:53.308 "data_offset": 2048, 00:14:53.308 "data_size": 63488 00:14:53.308 }, 00:14:53.308 { 00:14:53.308 "name": "BaseBdev2", 00:14:53.308 "uuid": "0ca42ad7-e8e3-58b1-bfa2-20a45af37030", 00:14:53.308 "is_configured": true, 00:14:53.308 "data_offset": 2048, 00:14:53.308 "data_size": 63488 00:14:53.308 } 00:14:53.308 ] 00:14:53.308 }' 00:14:53.308 04:05:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:53.308 04:05:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:53.308 04:05:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:53.308 04:05:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:53.308 04:05:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:14:53.308 04:05:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:53.308 04:05:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:53.308 [2024-12-06 04:05:46.602969] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:53.308 [2024-12-06 04:05:46.649522] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:14:53.308 [2024-12-06 04:05:46.649846] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:14:53.567 [2024-12-06 04:05:46.757837] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:14:53.567 [2024-12-06 04:05:46.760925] bdev_raid.c: 
345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:53.567 [2024-12-06 04:05:46.760981] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:53.567 [2024-12-06 04:05:46.760996] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:14:53.567 [2024-12-06 04:05:46.807954] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000006080 00:14:53.567 04:05:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:53.567 04:05:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:14:53.567 04:05:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:53.567 04:05:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:53.567 04:05:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:53.567 04:05:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:53.567 04:05:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:14:53.567 04:05:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:53.567 04:05:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:53.567 04:05:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:53.567 04:05:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:53.567 04:05:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:53.567 04:05:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:53.567 04:05:46 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@10 -- # set +x 00:14:53.567 04:05:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:53.567 04:05:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:53.567 04:05:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:53.567 "name": "raid_bdev1", 00:14:53.567 "uuid": "83d98405-c950-40c0-b352-2f4644181e12", 00:14:53.567 "strip_size_kb": 0, 00:14:53.567 "state": "online", 00:14:53.567 "raid_level": "raid1", 00:14:53.567 "superblock": true, 00:14:53.567 "num_base_bdevs": 2, 00:14:53.567 "num_base_bdevs_discovered": 1, 00:14:53.567 "num_base_bdevs_operational": 1, 00:14:53.567 "base_bdevs_list": [ 00:14:53.567 { 00:14:53.567 "name": null, 00:14:53.567 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:53.567 "is_configured": false, 00:14:53.567 "data_offset": 0, 00:14:53.567 "data_size": 63488 00:14:53.567 }, 00:14:53.567 { 00:14:53.567 "name": "BaseBdev2", 00:14:53.567 "uuid": "0ca42ad7-e8e3-58b1-bfa2-20a45af37030", 00:14:53.567 "is_configured": true, 00:14:53.567 "data_offset": 2048, 00:14:53.567 "data_size": 63488 00:14:53.567 } 00:14:53.567 ] 00:14:53.567 }' 00:14:53.567 04:05:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:53.567 04:05:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:54.086 154.50 IOPS, 463.50 MiB/s [2024-12-06T04:05:47.440Z] 04:05:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:54.086 04:05:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:54.086 04:05:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:54.086 04:05:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:54.086 04:05:47 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:54.086 04:05:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:54.086 04:05:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:54.086 04:05:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:54.086 04:05:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:54.086 04:05:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:54.086 04:05:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:54.086 "name": "raid_bdev1", 00:14:54.086 "uuid": "83d98405-c950-40c0-b352-2f4644181e12", 00:14:54.086 "strip_size_kb": 0, 00:14:54.086 "state": "online", 00:14:54.086 "raid_level": "raid1", 00:14:54.086 "superblock": true, 00:14:54.086 "num_base_bdevs": 2, 00:14:54.086 "num_base_bdevs_discovered": 1, 00:14:54.086 "num_base_bdevs_operational": 1, 00:14:54.086 "base_bdevs_list": [ 00:14:54.086 { 00:14:54.086 "name": null, 00:14:54.086 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:54.086 "is_configured": false, 00:14:54.086 "data_offset": 0, 00:14:54.086 "data_size": 63488 00:14:54.086 }, 00:14:54.086 { 00:14:54.086 "name": "BaseBdev2", 00:14:54.086 "uuid": "0ca42ad7-e8e3-58b1-bfa2-20a45af37030", 00:14:54.086 "is_configured": true, 00:14:54.086 "data_offset": 2048, 00:14:54.086 "data_size": 63488 00:14:54.086 } 00:14:54.086 ] 00:14:54.086 }' 00:14:54.086 04:05:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:54.086 04:05:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:54.086 04:05:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:54.086 04:05:47 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:54.086 04:05:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:14:54.086 04:05:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:54.086 04:05:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:54.086 [2024-12-06 04:05:47.389200] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:54.086 04:05:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:54.086 04:05:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@663 -- # sleep 1 00:14:54.346 [2024-12-06 04:05:47.451622] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:14:54.346 [2024-12-06 04:05:47.453743] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:14:54.346 [2024-12-06 04:05:47.588122] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:14:54.346 [2024-12-06 04:05:47.588744] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:14:54.606 [2024-12-06 04:05:47.826999] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:14:54.606 [2024-12-06 04:05:47.827499] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:14:54.867 155.33 IOPS, 466.00 MiB/s [2024-12-06T04:05:48.221Z] [2024-12-06 04:05:48.168148] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:14:54.867 [2024-12-06 04:05:48.168860] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 
offset_begin: 6144 offset_end: 12288 00:14:55.127 [2024-12-06 04:05:48.383706] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:14:55.127 04:05:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:55.127 04:05:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:55.127 04:05:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:55.127 04:05:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:55.127 04:05:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:55.127 04:05:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:55.127 04:05:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:55.127 04:05:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:55.127 04:05:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:55.127 04:05:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:55.388 04:05:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:55.388 "name": "raid_bdev1", 00:14:55.388 "uuid": "83d98405-c950-40c0-b352-2f4644181e12", 00:14:55.388 "strip_size_kb": 0, 00:14:55.388 "state": "online", 00:14:55.388 "raid_level": "raid1", 00:14:55.388 "superblock": true, 00:14:55.388 "num_base_bdevs": 2, 00:14:55.388 "num_base_bdevs_discovered": 2, 00:14:55.388 "num_base_bdevs_operational": 2, 00:14:55.388 "process": { 00:14:55.388 "type": "rebuild", 00:14:55.388 "target": "spare", 00:14:55.388 "progress": { 00:14:55.388 "blocks": 10240, 00:14:55.388 "percent": 16 00:14:55.388 } 
00:14:55.388 }, 00:14:55.388 "base_bdevs_list": [ 00:14:55.388 { 00:14:55.388 "name": "spare", 00:14:55.388 "uuid": "8ff21741-4e15-54c6-ad62-f8b164a1571f", 00:14:55.388 "is_configured": true, 00:14:55.388 "data_offset": 2048, 00:14:55.388 "data_size": 63488 00:14:55.388 }, 00:14:55.388 { 00:14:55.388 "name": "BaseBdev2", 00:14:55.388 "uuid": "0ca42ad7-e8e3-58b1-bfa2-20a45af37030", 00:14:55.388 "is_configured": true, 00:14:55.388 "data_offset": 2048, 00:14:55.388 "data_size": 63488 00:14:55.388 } 00:14:55.388 ] 00:14:55.388 }' 00:14:55.388 04:05:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:55.388 04:05:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:55.388 04:05:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:55.388 04:05:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:55.388 04:05:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:14:55.388 04:05:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:14:55.388 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:14:55.388 04:05:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:14:55.388 04:05:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:14:55.388 04:05:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:14:55.388 04:05:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@706 -- # local timeout=426 00:14:55.388 04:05:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:55.388 04:05:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 
00:14:55.388 04:05:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:55.388 04:05:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:55.388 04:05:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:55.388 04:05:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:55.388 04:05:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:55.388 04:05:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:55.388 04:05:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:55.388 04:05:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:55.388 04:05:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:55.388 04:05:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:55.388 "name": "raid_bdev1", 00:14:55.388 "uuid": "83d98405-c950-40c0-b352-2f4644181e12", 00:14:55.388 "strip_size_kb": 0, 00:14:55.388 "state": "online", 00:14:55.388 "raid_level": "raid1", 00:14:55.388 "superblock": true, 00:14:55.388 "num_base_bdevs": 2, 00:14:55.388 "num_base_bdevs_discovered": 2, 00:14:55.388 "num_base_bdevs_operational": 2, 00:14:55.388 "process": { 00:14:55.388 "type": "rebuild", 00:14:55.388 "target": "spare", 00:14:55.388 "progress": { 00:14:55.388 "blocks": 10240, 00:14:55.388 "percent": 16 00:14:55.388 } 00:14:55.388 }, 00:14:55.388 "base_bdevs_list": [ 00:14:55.388 { 00:14:55.388 "name": "spare", 00:14:55.388 "uuid": "8ff21741-4e15-54c6-ad62-f8b164a1571f", 00:14:55.388 "is_configured": true, 00:14:55.388 "data_offset": 2048, 00:14:55.388 "data_size": 63488 00:14:55.388 }, 00:14:55.388 { 00:14:55.388 "name": "BaseBdev2", 00:14:55.388 "uuid": 
"0ca42ad7-e8e3-58b1-bfa2-20a45af37030", 00:14:55.388 "is_configured": true, 00:14:55.388 "data_offset": 2048, 00:14:55.388 "data_size": 63488 00:14:55.388 } 00:14:55.388 ] 00:14:55.388 }' 00:14:55.388 04:05:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:55.388 04:05:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:55.388 04:05:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:55.388 [2024-12-06 04:05:48.728568] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:14:55.388 [2024-12-06 04:05:48.729280] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:14:55.388 04:05:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:55.388 04:05:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:14:55.648 [2024-12-06 04:05:48.854252] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:14:55.908 139.75 IOPS, 419.25 MiB/s [2024-12-06T04:05:49.262Z] [2024-12-06 04:05:49.234967] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576 00:14:56.479 [2024-12-06 04:05:49.727727] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 28672 offset_begin: 24576 offset_end: 30720 00:14:56.479 04:05:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:56.479 04:05:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:56.479 04:05:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:56.479 
04:05:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:56.479 04:05:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:56.479 04:05:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:56.479 04:05:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:56.479 04:05:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:56.479 04:05:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:56.479 04:05:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:56.479 04:05:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:56.479 04:05:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:56.479 "name": "raid_bdev1", 00:14:56.479 "uuid": "83d98405-c950-40c0-b352-2f4644181e12", 00:14:56.479 "strip_size_kb": 0, 00:14:56.479 "state": "online", 00:14:56.479 "raid_level": "raid1", 00:14:56.479 "superblock": true, 00:14:56.479 "num_base_bdevs": 2, 00:14:56.479 "num_base_bdevs_discovered": 2, 00:14:56.479 "num_base_bdevs_operational": 2, 00:14:56.479 "process": { 00:14:56.479 "type": "rebuild", 00:14:56.479 "target": "spare", 00:14:56.479 "progress": { 00:14:56.479 "blocks": 28672, 00:14:56.479 "percent": 45 00:14:56.479 } 00:14:56.479 }, 00:14:56.479 "base_bdevs_list": [ 00:14:56.479 { 00:14:56.479 "name": "spare", 00:14:56.479 "uuid": "8ff21741-4e15-54c6-ad62-f8b164a1571f", 00:14:56.479 "is_configured": true, 00:14:56.479 "data_offset": 2048, 00:14:56.479 "data_size": 63488 00:14:56.479 }, 00:14:56.479 { 00:14:56.479 "name": "BaseBdev2", 00:14:56.479 "uuid": "0ca42ad7-e8e3-58b1-bfa2-20a45af37030", 00:14:56.479 "is_configured": true, 00:14:56.479 "data_offset": 2048, 00:14:56.479 
"data_size": 63488 00:14:56.479 } 00:14:56.479 ] 00:14:56.479 }' 00:14:56.479 04:05:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:56.739 04:05:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:56.739 04:05:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:56.739 04:05:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:56.739 04:05:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:14:56.739 122.60 IOPS, 367.80 MiB/s [2024-12-06T04:05:50.093Z] [2024-12-06 04:05:50.063939] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 32768 offset_begin: 30720 offset_end: 36864 00:14:56.739 [2024-12-06 04:05:50.064710] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 32768 offset_begin: 30720 offset_end: 36864 00:14:57.317 [2024-12-06 04:05:50.523395] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 38912 offset_begin: 36864 offset_end: 43008 00:14:57.590 04:05:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:57.590 04:05:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:57.590 04:05:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:57.590 04:05:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:57.590 04:05:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:57.590 04:05:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:57.590 04:05:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:57.590 
04:05:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:57.590 04:05:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:57.590 04:05:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:57.590 04:05:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:57.590 04:05:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:57.590 "name": "raid_bdev1", 00:14:57.590 "uuid": "83d98405-c950-40c0-b352-2f4644181e12", 00:14:57.590 "strip_size_kb": 0, 00:14:57.590 "state": "online", 00:14:57.590 "raid_level": "raid1", 00:14:57.590 "superblock": true, 00:14:57.590 "num_base_bdevs": 2, 00:14:57.590 "num_base_bdevs_discovered": 2, 00:14:57.590 "num_base_bdevs_operational": 2, 00:14:57.590 "process": { 00:14:57.590 "type": "rebuild", 00:14:57.590 "target": "spare", 00:14:57.590 "progress": { 00:14:57.590 "blocks": 43008, 00:14:57.590 "percent": 67 00:14:57.590 } 00:14:57.590 }, 00:14:57.590 "base_bdevs_list": [ 00:14:57.590 { 00:14:57.590 "name": "spare", 00:14:57.590 "uuid": "8ff21741-4e15-54c6-ad62-f8b164a1571f", 00:14:57.590 "is_configured": true, 00:14:57.590 "data_offset": 2048, 00:14:57.590 "data_size": 63488 00:14:57.590 }, 00:14:57.590 { 00:14:57.590 "name": "BaseBdev2", 00:14:57.590 "uuid": "0ca42ad7-e8e3-58b1-bfa2-20a45af37030", 00:14:57.590 "is_configured": true, 00:14:57.590 "data_offset": 2048, 00:14:57.590 "data_size": 63488 00:14:57.590 } 00:14:57.590 ] 00:14:57.590 }' 00:14:57.590 04:05:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:57.849 04:05:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:57.849 04:05:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:57.850 [2024-12-06 
04:05:51.028561] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 47104 offset_begin: 43008 offset_end: 49152 00:14:57.850 04:05:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:57.850 04:05:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:14:58.109 109.83 IOPS, 329.50 MiB/s [2024-12-06T04:05:51.463Z] [2024-12-06 04:05:51.265354] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 51200 offset_begin: 49152 offset_end: 55296 00:14:58.368 [2024-12-06 04:05:51.707512] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 57344 offset_begin: 55296 offset_end: 61440 00:14:58.368 [2024-12-06 04:05:51.708252] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 57344 offset_begin: 55296 offset_end: 61440 00:14:58.937 04:05:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:58.937 04:05:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:58.937 04:05:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:58.937 04:05:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:58.937 04:05:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:58.937 04:05:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:58.937 99.14 IOPS, 297.43 MiB/s [2024-12-06T04:05:52.291Z] 04:05:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:58.937 04:05:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:58.937 04:05:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 
00:14:58.938 04:05:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:58.938 04:05:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:58.938 04:05:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:58.938 "name": "raid_bdev1", 00:14:58.938 "uuid": "83d98405-c950-40c0-b352-2f4644181e12", 00:14:58.938 "strip_size_kb": 0, 00:14:58.938 "state": "online", 00:14:58.938 "raid_level": "raid1", 00:14:58.938 "superblock": true, 00:14:58.938 "num_base_bdevs": 2, 00:14:58.938 "num_base_bdevs_discovered": 2, 00:14:58.938 "num_base_bdevs_operational": 2, 00:14:58.938 "process": { 00:14:58.938 "type": "rebuild", 00:14:58.938 "target": "spare", 00:14:58.938 "progress": { 00:14:58.938 "blocks": 61440, 00:14:58.938 "percent": 96 00:14:58.938 } 00:14:58.938 }, 00:14:58.938 "base_bdevs_list": [ 00:14:58.938 { 00:14:58.938 "name": "spare", 00:14:58.938 "uuid": "8ff21741-4e15-54c6-ad62-f8b164a1571f", 00:14:58.938 "is_configured": true, 00:14:58.938 "data_offset": 2048, 00:14:58.938 "data_size": 63488 00:14:58.938 }, 00:14:58.938 { 00:14:58.938 "name": "BaseBdev2", 00:14:58.938 "uuid": "0ca42ad7-e8e3-58b1-bfa2-20a45af37030", 00:14:58.938 "is_configured": true, 00:14:58.938 "data_offset": 2048, 00:14:58.938 "data_size": 63488 00:14:58.938 } 00:14:58.938 ] 00:14:58.938 }' 00:14:58.938 04:05:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:58.938 04:05:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:58.938 04:05:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:58.938 [2024-12-06 04:05:52.148340] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:14:58.938 04:05:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:58.938 
04:05:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:14:58.938 [2024-12-06 04:05:52.248169] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:14:58.938 [2024-12-06 04:05:52.250770] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:59.901 91.75 IOPS, 275.25 MiB/s [2024-12-06T04:05:53.255Z] 04:05:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:59.901 04:05:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:59.901 04:05:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:59.901 04:05:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:59.901 04:05:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:59.901 04:05:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:59.901 04:05:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:59.901 04:05:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:59.901 04:05:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:59.901 04:05:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:59.901 04:05:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:00.160 04:05:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:00.160 "name": "raid_bdev1", 00:15:00.160 "uuid": "83d98405-c950-40c0-b352-2f4644181e12", 00:15:00.161 "strip_size_kb": 0, 00:15:00.161 "state": "online", 00:15:00.161 "raid_level": "raid1", 00:15:00.161 "superblock": true, 00:15:00.161 
"num_base_bdevs": 2, 00:15:00.161 "num_base_bdevs_discovered": 2, 00:15:00.161 "num_base_bdevs_operational": 2, 00:15:00.161 "base_bdevs_list": [ 00:15:00.161 { 00:15:00.161 "name": "spare", 00:15:00.161 "uuid": "8ff21741-4e15-54c6-ad62-f8b164a1571f", 00:15:00.161 "is_configured": true, 00:15:00.161 "data_offset": 2048, 00:15:00.161 "data_size": 63488 00:15:00.161 }, 00:15:00.161 { 00:15:00.161 "name": "BaseBdev2", 00:15:00.161 "uuid": "0ca42ad7-e8e3-58b1-bfa2-20a45af37030", 00:15:00.161 "is_configured": true, 00:15:00.161 "data_offset": 2048, 00:15:00.161 "data_size": 63488 00:15:00.161 } 00:15:00.161 ] 00:15:00.161 }' 00:15:00.161 04:05:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:00.161 04:05:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:15:00.161 04:05:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:00.161 04:05:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:15:00.161 04:05:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@709 -- # break 00:15:00.161 04:05:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:00.161 04:05:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:00.161 04:05:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:00.161 04:05:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:00.161 04:05:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:00.161 04:05:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:00.161 04:05:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == 
"raid_bdev1")' 00:15:00.161 04:05:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:00.161 04:05:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:00.161 04:05:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:00.161 04:05:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:00.161 "name": "raid_bdev1", 00:15:00.161 "uuid": "83d98405-c950-40c0-b352-2f4644181e12", 00:15:00.161 "strip_size_kb": 0, 00:15:00.161 "state": "online", 00:15:00.161 "raid_level": "raid1", 00:15:00.161 "superblock": true, 00:15:00.161 "num_base_bdevs": 2, 00:15:00.161 "num_base_bdevs_discovered": 2, 00:15:00.161 "num_base_bdevs_operational": 2, 00:15:00.161 "base_bdevs_list": [ 00:15:00.161 { 00:15:00.161 "name": "spare", 00:15:00.161 "uuid": "8ff21741-4e15-54c6-ad62-f8b164a1571f", 00:15:00.161 "is_configured": true, 00:15:00.161 "data_offset": 2048, 00:15:00.161 "data_size": 63488 00:15:00.161 }, 00:15:00.161 { 00:15:00.161 "name": "BaseBdev2", 00:15:00.161 "uuid": "0ca42ad7-e8e3-58b1-bfa2-20a45af37030", 00:15:00.161 "is_configured": true, 00:15:00.161 "data_offset": 2048, 00:15:00.161 "data_size": 63488 00:15:00.161 } 00:15:00.161 ] 00:15:00.161 }' 00:15:00.161 04:05:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:00.161 04:05:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:00.161 04:05:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:00.161 04:05:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:00.161 04:05:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:15:00.161 04:05:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local 
raid_bdev_name=raid_bdev1 00:15:00.161 04:05:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:00.161 04:05:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:00.161 04:05:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:00.161 04:05:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:00.161 04:05:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:00.161 04:05:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:00.161 04:05:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:00.161 04:05:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:00.161 04:05:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:00.161 04:05:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:00.161 04:05:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:00.161 04:05:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:00.418 04:05:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:00.418 04:05:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:00.418 "name": "raid_bdev1", 00:15:00.418 "uuid": "83d98405-c950-40c0-b352-2f4644181e12", 00:15:00.418 "strip_size_kb": 0, 00:15:00.418 "state": "online", 00:15:00.418 "raid_level": "raid1", 00:15:00.418 "superblock": true, 00:15:00.418 "num_base_bdevs": 2, 00:15:00.418 "num_base_bdevs_discovered": 2, 00:15:00.418 "num_base_bdevs_operational": 2, 00:15:00.418 "base_bdevs_list": [ 00:15:00.418 { 00:15:00.418 "name": 
"spare", 00:15:00.418 "uuid": "8ff21741-4e15-54c6-ad62-f8b164a1571f", 00:15:00.418 "is_configured": true, 00:15:00.418 "data_offset": 2048, 00:15:00.418 "data_size": 63488 00:15:00.418 }, 00:15:00.418 { 00:15:00.418 "name": "BaseBdev2", 00:15:00.418 "uuid": "0ca42ad7-e8e3-58b1-bfa2-20a45af37030", 00:15:00.418 "is_configured": true, 00:15:00.418 "data_offset": 2048, 00:15:00.418 "data_size": 63488 00:15:00.418 } 00:15:00.418 ] 00:15:00.418 }' 00:15:00.418 04:05:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:00.418 04:05:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:00.677 04:05:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:15:00.677 04:05:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:00.677 04:05:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:00.677 [2024-12-06 04:05:53.947754] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:00.677 [2024-12-06 04:05:53.947794] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:00.677 00:15:00.677 Latency(us) 00:15:00.677 [2024-12-06T04:05:54.031Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:00.677 Job: raid_bdev1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 2, IO size: 3145728) 00:15:00.677 raid_bdev1 : 8.99 85.45 256.36 0.00 0.00 16962.62 300.49 109894.43 00:15:00.677 [2024-12-06T04:05:54.031Z] =================================================================================================================== 00:15:00.677 [2024-12-06T04:05:54.031Z] Total : 85.45 256.36 0.00 0.00 16962.62 300.49 109894.43 00:15:00.935 { 00:15:00.935 "results": [ 00:15:00.935 { 00:15:00.935 "job": "raid_bdev1", 00:15:00.935 "core_mask": "0x1", 00:15:00.935 "workload": "randrw", 00:15:00.935 
"percentage": 50, 00:15:00.935 "status": "finished", 00:15:00.935 "queue_depth": 2, 00:15:00.935 "io_size": 3145728, 00:15:00.935 "runtime": 8.987376, 00:15:00.935 "iops": 85.45319568247729, 00:15:00.935 "mibps": 256.3595870474319, 00:15:00.935 "io_failed": 0, 00:15:00.935 "io_timeout": 0, 00:15:00.935 "avg_latency_us": 16962.623580786025, 00:15:00.935 "min_latency_us": 300.49257641921395, 00:15:00.935 "max_latency_us": 109894.42794759825 00:15:00.935 } 00:15:00.935 ], 00:15:00.935 "core_count": 1 00:15:00.935 } 00:15:00.935 [2024-12-06 04:05:54.050396] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:00.935 [2024-12-06 04:05:54.050490] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:00.935 [2024-12-06 04:05:54.050586] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:00.935 [2024-12-06 04:05:54.050598] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:15:00.935 04:05:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:00.935 04:05:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # jq length 00:15:00.935 04:05:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:00.935 04:05:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:00.935 04:05:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:00.935 04:05:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:00.935 04:05:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:15:00.935 04:05:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:15:00.935 04:05:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@723 -- # '[' true = true ']' 
00:15:00.935 04:05:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@725 -- # nbd_start_disks /var/tmp/spdk.sock spare /dev/nbd0 00:15:00.935 04:05:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:15:00.935 04:05:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=('spare') 00:15:00.935 04:05:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:15:00.935 04:05:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:15:00.935 04:05:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:15:00.935 04:05:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i 00:15:00.935 04:05:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:15:00.935 04:05:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:15:00.935 04:05:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd0 00:15:01.194 /dev/nbd0 00:15:01.194 04:05:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:15:01.194 04:05:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:15:01.194 04:05:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:15:01.194 04:05:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 -- # local i 00:15:01.194 04:05:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:15:01.194 04:05:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:15:01.194 04:05:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:15:01.194 04:05:54 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@877 -- # break 00:15:01.194 04:05:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:15:01.194 04:05:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:15:01.194 04:05:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:01.194 1+0 records in 00:15:01.194 1+0 records out 00:15:01.194 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000647877 s, 6.3 MB/s 00:15:01.194 04:05:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:01.194 04:05:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # size=4096 00:15:01.194 04:05:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:01.194 04:05:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:15:01.194 04:05:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@893 -- # return 0 00:15:01.194 04:05:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:01.194 04:05:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:15:01.194 04:05:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:15:01.194 04:05:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@727 -- # '[' -z BaseBdev2 ']' 00:15:01.194 04:05:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@730 -- # nbd_start_disks /var/tmp/spdk.sock BaseBdev2 /dev/nbd1 00:15:01.194 04:05:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:15:01.194 04:05:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev2') 00:15:01.194 
04:05:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:15:01.194 04:05:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:15:01.194 04:05:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:15:01.194 04:05:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i 00:15:01.194 04:05:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:15:01.194 04:05:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:15:01.194 04:05:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev2 /dev/nbd1 00:15:01.453 /dev/nbd1 00:15:01.453 04:05:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:15:01.453 04:05:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:15:01.453 04:05:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:15:01.453 04:05:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 -- # local i 00:15:01.453 04:05:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:15:01.453 04:05:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:15:01.453 04:05:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:15:01.454 04:05:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@877 -- # break 00:15:01.454 04:05:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:15:01.454 04:05:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:15:01.454 04:05:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 
of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:01.454 1+0 records in 00:15:01.454 1+0 records out 00:15:01.454 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00030582 s, 13.4 MB/s 00:15:01.454 04:05:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:01.454 04:05:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # size=4096 00:15:01.454 04:05:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:01.454 04:05:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:15:01.454 04:05:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@893 -- # return 0 00:15:01.454 04:05:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:01.454 04:05:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:15:01.454 04:05:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@731 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:15:01.711 04:05:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@732 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1 00:15:01.711 04:05:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:15:01.711 04:05:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:15:01.711 04:05:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:15:01.711 04:05:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i 00:15:01.711 04:05:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:01.711 04:05:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 
00:15:01.969 04:05:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:15:01.969 04:05:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:15:01.969 04:05:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:15:01.969 04:05:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:01.969 04:05:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:01.969 04:05:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:15:01.969 04:05:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break 00:15:01.969 04:05:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0 00:15:01.969 04:05:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@734 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:15:01.969 04:05:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:15:01.969 04:05:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:15:01.969 04:05:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:15:01.969 04:05:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i 00:15:01.969 04:05:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:01.969 04:05:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:15:02.227 04:05:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:15:02.227 04:05:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:15:02.227 04:05:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:15:02.227 
04:05:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:02.227 04:05:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:02.227 04:05:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:15:02.227 04:05:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break 00:15:02.227 04:05:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0 00:15:02.227 04:05:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:15:02.227 04:05:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:15:02.227 04:05:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:02.227 04:05:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:02.227 04:05:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:02.227 04:05:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:15:02.227 04:05:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:02.227 04:05:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:02.227 [2024-12-06 04:05:55.447196] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:15:02.227 [2024-12-06 04:05:55.447332] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:02.227 [2024-12-06 04:05:55.447382] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a580 00:15:02.227 [2024-12-06 04:05:55.447415] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:02.227 [2024-12-06 04:05:55.450012] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:02.227 
[2024-12-06 04:05:55.450128] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:15:02.227 [2024-12-06 04:05:55.450270] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:15:02.227 [2024-12-06 04:05:55.450344] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:02.227 [2024-12-06 04:05:55.450534] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:02.227 spare 00:15:02.227 04:05:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:02.227 04:05:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:15:02.227 04:05:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:02.227 04:05:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:02.227 [2024-12-06 04:05:55.550458] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:15:02.227 [2024-12-06 04:05:55.550620] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:15:02.227 [2024-12-06 04:05:55.551060] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b0d0 00:15:02.227 [2024-12-06 04:05:55.551345] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:15:02.227 [2024-12-06 04:05:55.551393] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:15:02.227 [2024-12-06 04:05:55.551695] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:02.227 04:05:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:02.227 04:05:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:15:02.227 04:05:55 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:02.227 04:05:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:02.227 04:05:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:02.227 04:05:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:02.227 04:05:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:02.227 04:05:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:02.227 04:05:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:02.227 04:05:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:02.227 04:05:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:02.227 04:05:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:02.227 04:05:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:02.227 04:05:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:02.227 04:05:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:02.485 04:05:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:02.485 04:05:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:02.485 "name": "raid_bdev1", 00:15:02.485 "uuid": "83d98405-c950-40c0-b352-2f4644181e12", 00:15:02.485 "strip_size_kb": 0, 00:15:02.485 "state": "online", 00:15:02.485 "raid_level": "raid1", 00:15:02.485 "superblock": true, 00:15:02.485 "num_base_bdevs": 2, 00:15:02.485 "num_base_bdevs_discovered": 2, 00:15:02.485 "num_base_bdevs_operational": 2, 
00:15:02.485 "base_bdevs_list": [ 00:15:02.485 { 00:15:02.485 "name": "spare", 00:15:02.485 "uuid": "8ff21741-4e15-54c6-ad62-f8b164a1571f", 00:15:02.485 "is_configured": true, 00:15:02.485 "data_offset": 2048, 00:15:02.485 "data_size": 63488 00:15:02.485 }, 00:15:02.485 { 00:15:02.485 "name": "BaseBdev2", 00:15:02.485 "uuid": "0ca42ad7-e8e3-58b1-bfa2-20a45af37030", 00:15:02.485 "is_configured": true, 00:15:02.485 "data_offset": 2048, 00:15:02.485 "data_size": 63488 00:15:02.485 } 00:15:02.485 ] 00:15:02.485 }' 00:15:02.485 04:05:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:02.485 04:05:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:02.744 04:05:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:02.744 04:05:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:02.744 04:05:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:02.744 04:05:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:02.744 04:05:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:02.744 04:05:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:02.744 04:05:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:02.744 04:05:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:02.744 04:05:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:02.744 04:05:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:02.744 04:05:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:02.744 "name": "raid_bdev1", 
00:15:02.744 "uuid": "83d98405-c950-40c0-b352-2f4644181e12", 00:15:02.744 "strip_size_kb": 0, 00:15:02.744 "state": "online", 00:15:02.744 "raid_level": "raid1", 00:15:02.744 "superblock": true, 00:15:02.744 "num_base_bdevs": 2, 00:15:02.744 "num_base_bdevs_discovered": 2, 00:15:02.744 "num_base_bdevs_operational": 2, 00:15:02.744 "base_bdevs_list": [ 00:15:02.744 { 00:15:02.744 "name": "spare", 00:15:02.744 "uuid": "8ff21741-4e15-54c6-ad62-f8b164a1571f", 00:15:02.744 "is_configured": true, 00:15:02.744 "data_offset": 2048, 00:15:02.744 "data_size": 63488 00:15:02.744 }, 00:15:02.744 { 00:15:02.744 "name": "BaseBdev2", 00:15:02.744 "uuid": "0ca42ad7-e8e3-58b1-bfa2-20a45af37030", 00:15:02.744 "is_configured": true, 00:15:02.744 "data_offset": 2048, 00:15:02.744 "data_size": 63488 00:15:02.744 } 00:15:02.744 ] 00:15:02.744 }' 00:15:02.744 04:05:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:03.002 04:05:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:03.002 04:05:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:03.002 04:05:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:03.002 04:05:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:03.002 04:05:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:03.002 04:05:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:03.003 04:05:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:15:03.003 04:05:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:03.003 04:05:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:15:03.003 04:05:56 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:15:03.003 04:05:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:03.003 04:05:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:03.003 [2024-12-06 04:05:56.202662] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:03.003 04:05:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:03.003 04:05:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:15:03.003 04:05:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:03.003 04:05:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:03.003 04:05:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:03.003 04:05:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:03.003 04:05:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:15:03.003 04:05:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:03.003 04:05:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:03.003 04:05:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:03.003 04:05:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:03.003 04:05:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:03.003 04:05:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:03.003 04:05:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:03.003 
04:05:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:03.003 04:05:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:03.003 04:05:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:03.003 "name": "raid_bdev1", 00:15:03.003 "uuid": "83d98405-c950-40c0-b352-2f4644181e12", 00:15:03.003 "strip_size_kb": 0, 00:15:03.003 "state": "online", 00:15:03.003 "raid_level": "raid1", 00:15:03.003 "superblock": true, 00:15:03.003 "num_base_bdevs": 2, 00:15:03.003 "num_base_bdevs_discovered": 1, 00:15:03.003 "num_base_bdevs_operational": 1, 00:15:03.003 "base_bdevs_list": [ 00:15:03.003 { 00:15:03.003 "name": null, 00:15:03.003 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:03.003 "is_configured": false, 00:15:03.003 "data_offset": 0, 00:15:03.003 "data_size": 63488 00:15:03.003 }, 00:15:03.003 { 00:15:03.003 "name": "BaseBdev2", 00:15:03.003 "uuid": "0ca42ad7-e8e3-58b1-bfa2-20a45af37030", 00:15:03.003 "is_configured": true, 00:15:03.003 "data_offset": 2048, 00:15:03.003 "data_size": 63488 00:15:03.003 } 00:15:03.003 ] 00:15:03.003 }' 00:15:03.003 04:05:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:03.003 04:05:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:03.570 04:05:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:15:03.570 04:05:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:03.570 04:05:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:03.570 [2024-12-06 04:05:56.701916] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:03.570 [2024-12-06 04:05:56.702225] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) 
smaller than existing raid bdev raid_bdev1 (5) 00:15:03.570 [2024-12-06 04:05:56.702303] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:15:03.570 [2024-12-06 04:05:56.702391] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:03.570 [2024-12-06 04:05:56.720947] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b1a0 00:15:03.570 04:05:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:03.570 04:05:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@757 -- # sleep 1 00:15:03.570 [2024-12-06 04:05:56.723282] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:15:04.581 04:05:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:04.581 04:05:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:04.581 04:05:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:04.581 04:05:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:04.581 04:05:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:04.581 04:05:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:04.581 04:05:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:04.581 04:05:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:04.581 04:05:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:04.581 04:05:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:04.581 04:05:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # 
raid_bdev_info='{ 00:15:04.581 "name": "raid_bdev1", 00:15:04.581 "uuid": "83d98405-c950-40c0-b352-2f4644181e12", 00:15:04.581 "strip_size_kb": 0, 00:15:04.581 "state": "online", 00:15:04.581 "raid_level": "raid1", 00:15:04.581 "superblock": true, 00:15:04.581 "num_base_bdevs": 2, 00:15:04.581 "num_base_bdevs_discovered": 2, 00:15:04.581 "num_base_bdevs_operational": 2, 00:15:04.581 "process": { 00:15:04.581 "type": "rebuild", 00:15:04.581 "target": "spare", 00:15:04.581 "progress": { 00:15:04.581 "blocks": 20480, 00:15:04.581 "percent": 32 00:15:04.581 } 00:15:04.581 }, 00:15:04.581 "base_bdevs_list": [ 00:15:04.581 { 00:15:04.581 "name": "spare", 00:15:04.581 "uuid": "8ff21741-4e15-54c6-ad62-f8b164a1571f", 00:15:04.581 "is_configured": true, 00:15:04.581 "data_offset": 2048, 00:15:04.581 "data_size": 63488 00:15:04.581 }, 00:15:04.581 { 00:15:04.581 "name": "BaseBdev2", 00:15:04.581 "uuid": "0ca42ad7-e8e3-58b1-bfa2-20a45af37030", 00:15:04.581 "is_configured": true, 00:15:04.581 "data_offset": 2048, 00:15:04.581 "data_size": 63488 00:15:04.581 } 00:15:04.581 ] 00:15:04.581 }' 00:15:04.581 04:05:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:04.581 04:05:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:04.581 04:05:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:04.581 04:05:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:04.581 04:05:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:15:04.581 04:05:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:04.581 04:05:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:04.581 [2024-12-06 04:05:57.890526] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 
00:15:04.581 [2024-12-06 04:05:57.929525] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:15:04.581 [2024-12-06 04:05:57.929624] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:04.581 [2024-12-06 04:05:57.929642] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:04.581 [2024-12-06 04:05:57.929657] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:15:04.841 04:05:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:04.841 04:05:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:15:04.841 04:05:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:04.841 04:05:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:04.841 04:05:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:04.841 04:05:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:04.841 04:05:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:15:04.841 04:05:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:04.841 04:05:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:04.841 04:05:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:04.841 04:05:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:04.841 04:05:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:04.841 04:05:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == 
"raid_bdev1")' 00:15:04.841 04:05:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:04.841 04:05:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:04.841 04:05:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:04.841 04:05:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:04.841 "name": "raid_bdev1", 00:15:04.841 "uuid": "83d98405-c950-40c0-b352-2f4644181e12", 00:15:04.841 "strip_size_kb": 0, 00:15:04.841 "state": "online", 00:15:04.841 "raid_level": "raid1", 00:15:04.841 "superblock": true, 00:15:04.841 "num_base_bdevs": 2, 00:15:04.841 "num_base_bdevs_discovered": 1, 00:15:04.841 "num_base_bdevs_operational": 1, 00:15:04.841 "base_bdevs_list": [ 00:15:04.841 { 00:15:04.841 "name": null, 00:15:04.841 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:04.841 "is_configured": false, 00:15:04.841 "data_offset": 0, 00:15:04.841 "data_size": 63488 00:15:04.841 }, 00:15:04.841 { 00:15:04.841 "name": "BaseBdev2", 00:15:04.841 "uuid": "0ca42ad7-e8e3-58b1-bfa2-20a45af37030", 00:15:04.841 "is_configured": true, 00:15:04.841 "data_offset": 2048, 00:15:04.841 "data_size": 63488 00:15:04.841 } 00:15:04.841 ] 00:15:04.841 }' 00:15:04.841 04:05:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:04.841 04:05:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:05.444 04:05:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:15:05.444 04:05:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:05.444 04:05:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:05.444 [2024-12-06 04:05:58.474236] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:15:05.444 [2024-12-06 
04:05:58.474395] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:05.444 [2024-12-06 04:05:58.474424] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:15:05.444 [2024-12-06 04:05:58.474436] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:05.444 [2024-12-06 04:05:58.474971] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:05.444 [2024-12-06 04:05:58.475005] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:15:05.444 [2024-12-06 04:05:58.475124] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:15:05.444 [2024-12-06 04:05:58.475142] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:15:05.444 [2024-12-06 04:05:58.475153] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:15:05.444 [2024-12-06 04:05:58.475184] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:05.444 [2024-12-06 04:05:58.492252] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b270 00:15:05.444 spare 00:15:05.444 04:05:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:05.444 04:05:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@764 -- # sleep 1 00:15:05.444 [2024-12-06 04:05:58.494508] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:15:06.382 04:05:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:06.382 04:05:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:06.383 04:05:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:06.383 04:05:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:06.383 04:05:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:06.383 04:05:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:06.383 04:05:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:06.383 04:05:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:06.383 04:05:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:06.383 04:05:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:06.383 04:05:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:06.383 "name": "raid_bdev1", 00:15:06.383 "uuid": "83d98405-c950-40c0-b352-2f4644181e12", 00:15:06.383 "strip_size_kb": 0, 00:15:06.383 
"state": "online", 00:15:06.383 "raid_level": "raid1", 00:15:06.383 "superblock": true, 00:15:06.383 "num_base_bdevs": 2, 00:15:06.383 "num_base_bdevs_discovered": 2, 00:15:06.383 "num_base_bdevs_operational": 2, 00:15:06.383 "process": { 00:15:06.383 "type": "rebuild", 00:15:06.383 "target": "spare", 00:15:06.383 "progress": { 00:15:06.383 "blocks": 20480, 00:15:06.383 "percent": 32 00:15:06.383 } 00:15:06.383 }, 00:15:06.383 "base_bdevs_list": [ 00:15:06.383 { 00:15:06.383 "name": "spare", 00:15:06.383 "uuid": "8ff21741-4e15-54c6-ad62-f8b164a1571f", 00:15:06.383 "is_configured": true, 00:15:06.383 "data_offset": 2048, 00:15:06.383 "data_size": 63488 00:15:06.383 }, 00:15:06.383 { 00:15:06.383 "name": "BaseBdev2", 00:15:06.383 "uuid": "0ca42ad7-e8e3-58b1-bfa2-20a45af37030", 00:15:06.383 "is_configured": true, 00:15:06.383 "data_offset": 2048, 00:15:06.383 "data_size": 63488 00:15:06.383 } 00:15:06.383 ] 00:15:06.383 }' 00:15:06.383 04:05:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:06.383 04:05:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:06.383 04:05:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:06.383 04:05:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:06.383 04:05:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:15:06.383 04:05:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:06.383 04:05:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:06.383 [2024-12-06 04:05:59.634348] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:06.383 [2024-12-06 04:05:59.700657] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 
00:15:06.383 [2024-12-06 04:05:59.700751] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:06.383 [2024-12-06 04:05:59.700775] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:06.383 [2024-12-06 04:05:59.700783] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:15:06.642 04:05:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:06.642 04:05:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:15:06.642 04:05:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:06.642 04:05:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:06.642 04:05:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:06.642 04:05:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:06.643 04:05:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:15:06.643 04:05:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:06.643 04:05:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:06.643 04:05:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:06.643 04:05:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:06.643 04:05:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:06.643 04:05:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:06.643 04:05:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:06.643 04:05:59 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:06.643 04:05:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:06.643 04:05:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:06.643 "name": "raid_bdev1", 00:15:06.643 "uuid": "83d98405-c950-40c0-b352-2f4644181e12", 00:15:06.643 "strip_size_kb": 0, 00:15:06.643 "state": "online", 00:15:06.643 "raid_level": "raid1", 00:15:06.643 "superblock": true, 00:15:06.643 "num_base_bdevs": 2, 00:15:06.643 "num_base_bdevs_discovered": 1, 00:15:06.643 "num_base_bdevs_operational": 1, 00:15:06.643 "base_bdevs_list": [ 00:15:06.643 { 00:15:06.643 "name": null, 00:15:06.643 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:06.643 "is_configured": false, 00:15:06.643 "data_offset": 0, 00:15:06.643 "data_size": 63488 00:15:06.643 }, 00:15:06.643 { 00:15:06.643 "name": "BaseBdev2", 00:15:06.643 "uuid": "0ca42ad7-e8e3-58b1-bfa2-20a45af37030", 00:15:06.643 "is_configured": true, 00:15:06.643 "data_offset": 2048, 00:15:06.643 "data_size": 63488 00:15:06.643 } 00:15:06.643 ] 00:15:06.643 }' 00:15:06.643 04:05:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:06.643 04:05:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:06.903 04:06:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:06.903 04:06:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:06.903 04:06:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:06.903 04:06:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:06.903 04:06:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:06.903 04:06:00 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:06.903 04:06:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:06.903 04:06:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:06.903 04:06:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:06.903 04:06:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:06.903 04:06:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:06.903 "name": "raid_bdev1", 00:15:06.903 "uuid": "83d98405-c950-40c0-b352-2f4644181e12", 00:15:06.903 "strip_size_kb": 0, 00:15:06.903 "state": "online", 00:15:06.903 "raid_level": "raid1", 00:15:06.903 "superblock": true, 00:15:06.903 "num_base_bdevs": 2, 00:15:06.903 "num_base_bdevs_discovered": 1, 00:15:06.903 "num_base_bdevs_operational": 1, 00:15:06.903 "base_bdevs_list": [ 00:15:06.903 { 00:15:06.903 "name": null, 00:15:06.903 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:06.903 "is_configured": false, 00:15:06.903 "data_offset": 0, 00:15:06.903 "data_size": 63488 00:15:06.903 }, 00:15:06.903 { 00:15:06.903 "name": "BaseBdev2", 00:15:06.903 "uuid": "0ca42ad7-e8e3-58b1-bfa2-20a45af37030", 00:15:06.903 "is_configured": true, 00:15:06.903 "data_offset": 2048, 00:15:06.903 "data_size": 63488 00:15:06.903 } 00:15:06.903 ] 00:15:06.903 }' 00:15:06.903 04:06:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:07.163 04:06:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:07.163 04:06:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:07.163 04:06:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:07.163 04:06:00 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:15:07.163 04:06:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:07.163 04:06:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:07.163 04:06:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:07.163 04:06:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:15:07.163 04:06:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:07.163 04:06:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:07.163 [2024-12-06 04:06:00.326220] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:15:07.163 [2024-12-06 04:06:00.326309] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:07.163 [2024-12-06 04:06:00.326353] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480 00:15:07.163 [2024-12-06 04:06:00.326366] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:07.163 [2024-12-06 04:06:00.326894] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:07.163 [2024-12-06 04:06:00.326921] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:15:07.163 [2024-12-06 04:06:00.327017] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:15:07.163 [2024-12-06 04:06:00.327034] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:15:07.163 [2024-12-06 04:06:00.327048] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:15:07.163 [2024-12-06 04:06:00.327060] 
bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:15:07.163 BaseBdev1 00:15:07.163 04:06:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:07.163 04:06:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@775 -- # sleep 1 00:15:08.103 04:06:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:15:08.103 04:06:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:08.103 04:06:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:08.103 04:06:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:08.103 04:06:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:08.103 04:06:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:15:08.103 04:06:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:08.103 04:06:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:08.103 04:06:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:08.103 04:06:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:08.103 04:06:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:08.103 04:06:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:08.103 04:06:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:08.103 04:06:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:08.103 04:06:01 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:08.103 04:06:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:08.103 "name": "raid_bdev1", 00:15:08.103 "uuid": "83d98405-c950-40c0-b352-2f4644181e12", 00:15:08.103 "strip_size_kb": 0, 00:15:08.103 "state": "online", 00:15:08.103 "raid_level": "raid1", 00:15:08.103 "superblock": true, 00:15:08.103 "num_base_bdevs": 2, 00:15:08.103 "num_base_bdevs_discovered": 1, 00:15:08.103 "num_base_bdevs_operational": 1, 00:15:08.103 "base_bdevs_list": [ 00:15:08.103 { 00:15:08.103 "name": null, 00:15:08.103 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:08.103 "is_configured": false, 00:15:08.103 "data_offset": 0, 00:15:08.103 "data_size": 63488 00:15:08.103 }, 00:15:08.103 { 00:15:08.103 "name": "BaseBdev2", 00:15:08.103 "uuid": "0ca42ad7-e8e3-58b1-bfa2-20a45af37030", 00:15:08.103 "is_configured": true, 00:15:08.103 "data_offset": 2048, 00:15:08.103 "data_size": 63488 00:15:08.103 } 00:15:08.103 ] 00:15:08.103 }' 00:15:08.103 04:06:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:08.103 04:06:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:08.673 04:06:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:08.673 04:06:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:08.673 04:06:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:08.673 04:06:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:08.673 04:06:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:08.673 04:06:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:08.673 04:06:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- 
# xtrace_disable 00:15:08.673 04:06:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:08.673 04:06:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:08.673 04:06:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:08.673 04:06:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:08.673 "name": "raid_bdev1", 00:15:08.673 "uuid": "83d98405-c950-40c0-b352-2f4644181e12", 00:15:08.673 "strip_size_kb": 0, 00:15:08.673 "state": "online", 00:15:08.673 "raid_level": "raid1", 00:15:08.673 "superblock": true, 00:15:08.673 "num_base_bdevs": 2, 00:15:08.673 "num_base_bdevs_discovered": 1, 00:15:08.673 "num_base_bdevs_operational": 1, 00:15:08.673 "base_bdevs_list": [ 00:15:08.673 { 00:15:08.673 "name": null, 00:15:08.673 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:08.673 "is_configured": false, 00:15:08.673 "data_offset": 0, 00:15:08.673 "data_size": 63488 00:15:08.673 }, 00:15:08.673 { 00:15:08.673 "name": "BaseBdev2", 00:15:08.673 "uuid": "0ca42ad7-e8e3-58b1-bfa2-20a45af37030", 00:15:08.673 "is_configured": true, 00:15:08.673 "data_offset": 2048, 00:15:08.673 "data_size": 63488 00:15:08.673 } 00:15:08.673 ] 00:15:08.673 }' 00:15:08.673 04:06:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:08.673 04:06:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:08.673 04:06:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:08.673 04:06:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:08.673 04:06:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:15:08.673 04:06:01 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@652 -- # local es=0 00:15:08.673 04:06:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:15:08.673 04:06:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:15:08.673 04:06:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:08.673 04:06:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:15:08.673 04:06:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:08.673 04:06:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:15:08.673 04:06:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:08.673 04:06:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:08.673 [2024-12-06 04:06:01.943857] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:08.673 [2024-12-06 04:06:01.944060] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:15:08.673 [2024-12-06 04:06:01.944080] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:15:08.673 request: 00:15:08.673 { 00:15:08.673 "base_bdev": "BaseBdev1", 00:15:08.673 "raid_bdev": "raid_bdev1", 00:15:08.673 "method": "bdev_raid_add_base_bdev", 00:15:08.673 "req_id": 1 00:15:08.673 } 00:15:08.673 Got JSON-RPC error response 00:15:08.673 response: 00:15:08.673 { 00:15:08.673 "code": -22, 00:15:08.673 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:15:08.673 } 00:15:08.673 04:06:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 
00:15:08.673 04:06:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@655 -- # es=1 00:15:08.673 04:06:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:15:08.673 04:06:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:15:08.673 04:06:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:15:08.673 04:06:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@779 -- # sleep 1 00:15:09.611 04:06:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:15:09.611 04:06:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:09.611 04:06:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:09.611 04:06:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:09.611 04:06:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:09.611 04:06:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:15:09.611 04:06:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:09.611 04:06:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:09.611 04:06:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:09.611 04:06:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:09.611 04:06:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:09.611 04:06:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:09.870 04:06:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:15:09.870 04:06:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:09.870 04:06:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:09.871 04:06:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:09.871 "name": "raid_bdev1", 00:15:09.871 "uuid": "83d98405-c950-40c0-b352-2f4644181e12", 00:15:09.871 "strip_size_kb": 0, 00:15:09.871 "state": "online", 00:15:09.871 "raid_level": "raid1", 00:15:09.871 "superblock": true, 00:15:09.871 "num_base_bdevs": 2, 00:15:09.871 "num_base_bdevs_discovered": 1, 00:15:09.871 "num_base_bdevs_operational": 1, 00:15:09.871 "base_bdevs_list": [ 00:15:09.871 { 00:15:09.871 "name": null, 00:15:09.871 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:09.871 "is_configured": false, 00:15:09.871 "data_offset": 0, 00:15:09.871 "data_size": 63488 00:15:09.871 }, 00:15:09.871 { 00:15:09.871 "name": "BaseBdev2", 00:15:09.871 "uuid": "0ca42ad7-e8e3-58b1-bfa2-20a45af37030", 00:15:09.871 "is_configured": true, 00:15:09.871 "data_offset": 2048, 00:15:09.871 "data_size": 63488 00:15:09.871 } 00:15:09.871 ] 00:15:09.871 }' 00:15:09.871 04:06:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:09.871 04:06:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:10.129 04:06:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:10.129 04:06:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:10.129 04:06:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:10.129 04:06:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:10.129 04:06:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:10.129 04:06:03 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:10.129 04:06:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:10.129 04:06:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:10.129 04:06:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:10.129 04:06:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:10.129 04:06:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:10.129 "name": "raid_bdev1", 00:15:10.129 "uuid": "83d98405-c950-40c0-b352-2f4644181e12", 00:15:10.129 "strip_size_kb": 0, 00:15:10.129 "state": "online", 00:15:10.129 "raid_level": "raid1", 00:15:10.129 "superblock": true, 00:15:10.129 "num_base_bdevs": 2, 00:15:10.129 "num_base_bdevs_discovered": 1, 00:15:10.129 "num_base_bdevs_operational": 1, 00:15:10.129 "base_bdevs_list": [ 00:15:10.129 { 00:15:10.129 "name": null, 00:15:10.129 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:10.129 "is_configured": false, 00:15:10.129 "data_offset": 0, 00:15:10.129 "data_size": 63488 00:15:10.129 }, 00:15:10.129 { 00:15:10.129 "name": "BaseBdev2", 00:15:10.129 "uuid": "0ca42ad7-e8e3-58b1-bfa2-20a45af37030", 00:15:10.129 "is_configured": true, 00:15:10.129 "data_offset": 2048, 00:15:10.129 "data_size": 63488 00:15:10.129 } 00:15:10.129 ] 00:15:10.129 }' 00:15:10.129 04:06:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:10.388 04:06:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:10.388 04:06:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:10.388 04:06:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:10.388 04:06:03 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@784 -- # killprocess 77038 00:15:10.388 04:06:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@954 -- # '[' -z 77038 ']' 00:15:10.388 04:06:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@958 -- # kill -0 77038 00:15:10.388 04:06:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@959 -- # uname 00:15:10.388 04:06:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:10.388 04:06:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 77038 00:15:10.388 04:06:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:10.388 killing process with pid 77038 00:15:10.388 Received shutdown signal, test time was about 18.559263 seconds 00:15:10.388 00:15:10.388 Latency(us) 00:15:10.388 [2024-12-06T04:06:03.742Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:10.388 [2024-12-06T04:06:03.742Z] =================================================================================================================== 00:15:10.388 [2024-12-06T04:06:03.742Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:15:10.388 04:06:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:10.388 04:06:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@972 -- # echo 'killing process with pid 77038' 00:15:10.388 04:06:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@973 -- # kill 77038 00:15:10.388 [2024-12-06 04:06:03.577030] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:10.388 [2024-12-06 04:06:03.577217] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:10.388 04:06:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@978 -- # wait 77038 00:15:10.388 [2024-12-06 04:06:03.577308] 
bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:10.388 [2024-12-06 04:06:03.577324] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:15:10.646 [2024-12-06 04:06:03.856870] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:12.057 04:06:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@786 -- # return 0 00:15:12.057 00:15:12.057 real 0m22.061s 00:15:12.057 user 0m28.561s 00:15:12.057 sys 0m2.361s 00:15:12.057 04:06:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:12.057 ************************************ 00:15:12.058 END TEST raid_rebuild_test_sb_io 00:15:12.058 ************************************ 00:15:12.058 04:06:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:12.058 04:06:05 bdev_raid -- bdev/bdev_raid.sh@977 -- # for n in 2 4 00:15:12.058 04:06:05 bdev_raid -- bdev/bdev_raid.sh@978 -- # run_test raid_rebuild_test raid_rebuild_test raid1 4 false false true 00:15:12.058 04:06:05 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:15:12.058 04:06:05 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:12.058 04:06:05 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:15:12.058 ************************************ 00:15:12.058 START TEST raid_rebuild_test 00:15:12.058 ************************************ 00:15:12.058 04:06:05 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 4 false false true 00:15:12.058 04:06:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:15:12.058 04:06:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4 00:15:12.058 04:06:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@571 -- # local superblock=false 00:15:12.058 04:06:05 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:15:12.058 04:06:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@573 -- # local verify=true 00:15:12.058 04:06:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:15:12.058 04:06:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:12.058 04:06:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:15:12.058 04:06:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:15:12.058 04:06:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:12.058 04:06:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:15:12.058 04:06:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:15:12.058 04:06:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:12.058 04:06:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:15:12.058 04:06:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:15:12.058 04:06:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:12.058 04:06:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev4 00:15:12.058 04:06:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:15:12.059 04:06:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:12.059 04:06:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:15:12.059 04:06:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:15:12.059 04:06:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:15:12.059 04:06:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # local strip_size 
00:15:12.059 04:06:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@577 -- # local create_arg 00:15:12.059 04:06:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:15:12.059 04:06:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@579 -- # local data_offset 00:15:12.059 04:06:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:15:12.059 04:06:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:15:12.059 04:06:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@592 -- # '[' false = true ']' 00:15:12.059 04:06:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@597 -- # raid_pid=77758 00:15:12.059 04:06:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@598 -- # waitforlisten 77758 00:15:12.059 04:06:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:15:12.059 04:06:05 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@835 -- # '[' -z 77758 ']' 00:15:12.059 04:06:05 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:12.059 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:12.059 04:06:05 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:12.059 04:06:05 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:12.059 04:06:05 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:12.059 04:06:05 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:12.323 [2024-12-06 04:06:05.424331] Starting SPDK v25.01-pre git sha1 a4d2a837b / DPDK 24.03.0 initialization... 
00:15:12.323 [2024-12-06 04:06:05.424566] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.ealI/O size of 3145728 is greater than zero copy threshold (65536). 00:15:12.323 Zero copy mechanism will not be used. 00:15:12.323 :6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77758 ] 00:15:12.323 [2024-12-06 04:06:05.621977] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:12.581 [2024-12-06 04:06:05.758966] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:12.840 [2024-12-06 04:06:06.000677] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:12.840 [2024-12-06 04:06:06.000733] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:13.099 04:06:06 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:13.099 04:06:06 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@868 -- # return 0 00:15:13.099 04:06:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:15:13.099 04:06:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:15:13.099 04:06:06 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:13.099 04:06:06 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:13.099 BaseBdev1_malloc 00:15:13.099 04:06:06 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:13.099 04:06:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:15:13.099 04:06:06 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:13.099 04:06:06 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 
00:15:13.099 [2024-12-06 04:06:06.405413] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:15:13.099 [2024-12-06 04:06:06.405514] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:13.099 [2024-12-06 04:06:06.405553] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:15:13.099 [2024-12-06 04:06:06.405572] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:13.099 [2024-12-06 04:06:06.408250] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:13.099 [2024-12-06 04:06:06.408313] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:15:13.099 BaseBdev1 00:15:13.099 04:06:06 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:13.099 04:06:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:15:13.099 04:06:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:15:13.099 04:06:06 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:13.099 04:06:06 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:13.359 BaseBdev2_malloc 00:15:13.359 04:06:06 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:13.359 04:06:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:15:13.359 04:06:06 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:13.359 04:06:06 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:13.359 [2024-12-06 04:06:06.467843] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:15:13.359 [2024-12-06 04:06:06.467950] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: 
base bdev opened 00:15:13.359 [2024-12-06 04:06:06.467994] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:15:13.359 [2024-12-06 04:06:06.468012] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:13.359 [2024-12-06 04:06:06.470746] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:13.359 [2024-12-06 04:06:06.470897] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:15:13.359 BaseBdev2 00:15:13.359 04:06:06 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:13.359 04:06:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:15:13.359 04:06:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:15:13.359 04:06:06 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:13.359 04:06:06 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:13.359 BaseBdev3_malloc 00:15:13.359 04:06:06 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:13.359 04:06:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:15:13.359 04:06:06 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:13.359 04:06:06 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:13.359 [2024-12-06 04:06:06.541873] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:15:13.359 [2024-12-06 04:06:06.542092] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:13.359 [2024-12-06 04:06:06.542141] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:15:13.359 [2024-12-06 04:06:06.542160] vbdev_passthru.c: 
697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:13.359 [2024-12-06 04:06:06.544872] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:13.359 [2024-12-06 04:06:06.544947] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:15:13.359 BaseBdev3 00:15:13.359 04:06:06 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:13.359 04:06:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:15:13.359 04:06:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:15:13.359 04:06:06 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:13.359 04:06:06 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:13.359 BaseBdev4_malloc 00:15:13.359 04:06:06 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:13.359 04:06:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:15:13.359 04:06:06 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:13.359 04:06:06 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:13.359 [2024-12-06 04:06:06.602796] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:15:13.359 [2024-12-06 04:06:06.602905] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:13.359 [2024-12-06 04:06:06.602944] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:15:13.359 [2024-12-06 04:06:06.602961] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:13.359 [2024-12-06 04:06:06.605655] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:13.359 [2024-12-06 04:06:06.605729] 
vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:15:13.359 BaseBdev4 00:15:13.359 04:06:06 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:13.359 04:06:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:15:13.359 04:06:06 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:13.359 04:06:06 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:13.359 spare_malloc 00:15:13.359 04:06:06 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:13.359 04:06:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:15:13.359 04:06:06 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:13.359 04:06:06 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:13.359 spare_delay 00:15:13.359 04:06:06 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:13.359 04:06:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:15:13.359 04:06:06 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:13.359 04:06:06 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:13.359 [2024-12-06 04:06:06.675938] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:15:13.359 [2024-12-06 04:06:06.676034] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:13.359 [2024-12-06 04:06:06.676098] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:15:13.359 [2024-12-06 04:06:06.676117] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:13.359 [2024-12-06 
04:06:06.678753] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:13.359 [2024-12-06 04:06:06.678817] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:15:13.359 spare 00:15:13.359 04:06:06 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:13.359 04:06:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 00:15:13.359 04:06:06 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:13.359 04:06:06 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:13.359 [2024-12-06 04:06:06.687973] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:13.359 [2024-12-06 04:06:06.690207] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:13.359 [2024-12-06 04:06:06.690287] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:13.359 [2024-12-06 04:06:06.690349] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:15:13.359 [2024-12-06 04:06:06.690455] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:15:13.359 [2024-12-06 04:06:06.690471] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:15:13.359 [2024-12-06 04:06:06.690798] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:15:13.359 [2024-12-06 04:06:06.691003] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:15:13.359 [2024-12-06 04:06:06.691017] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:15:13.359 [2024-12-06 04:06:06.691250] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 
00:15:13.359 04:06:06 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:13.359 04:06:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:15:13.359 04:06:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:13.359 04:06:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:13.359 04:06:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:13.359 04:06:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:13.359 04:06:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:13.359 04:06:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:13.359 04:06:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:13.359 04:06:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:13.359 04:06:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:13.359 04:06:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:13.359 04:06:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:13.359 04:06:06 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:13.359 04:06:06 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:13.619 04:06:06 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:13.619 04:06:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:13.619 "name": "raid_bdev1", 00:15:13.619 "uuid": "b9b81a55-9934-45c5-974d-a63a96924d0b", 00:15:13.619 "strip_size_kb": 0, 00:15:13.619 "state": "online", 00:15:13.619 "raid_level": 
"raid1", 00:15:13.619 "superblock": false, 00:15:13.619 "num_base_bdevs": 4, 00:15:13.619 "num_base_bdevs_discovered": 4, 00:15:13.619 "num_base_bdevs_operational": 4, 00:15:13.619 "base_bdevs_list": [ 00:15:13.619 { 00:15:13.619 "name": "BaseBdev1", 00:15:13.619 "uuid": "4d0214ea-c63f-53ea-a1a1-a5342b2e6ec7", 00:15:13.619 "is_configured": true, 00:15:13.619 "data_offset": 0, 00:15:13.619 "data_size": 65536 00:15:13.619 }, 00:15:13.619 { 00:15:13.619 "name": "BaseBdev2", 00:15:13.619 "uuid": "3dae52aa-ac5e-5077-a267-5fc5ad43b9e2", 00:15:13.619 "is_configured": true, 00:15:13.619 "data_offset": 0, 00:15:13.619 "data_size": 65536 00:15:13.619 }, 00:15:13.619 { 00:15:13.619 "name": "BaseBdev3", 00:15:13.619 "uuid": "edfbfd25-6f65-5574-ab7a-1bf582a012f3", 00:15:13.619 "is_configured": true, 00:15:13.619 "data_offset": 0, 00:15:13.619 "data_size": 65536 00:15:13.619 }, 00:15:13.619 { 00:15:13.619 "name": "BaseBdev4", 00:15:13.619 "uuid": "d8fefda8-6a6f-58e0-b18b-61a53bbf1a06", 00:15:13.619 "is_configured": true, 00:15:13.619 "data_offset": 0, 00:15:13.619 "data_size": 65536 00:15:13.619 } 00:15:13.619 ] 00:15:13.619 }' 00:15:13.619 04:06:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:13.619 04:06:06 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:13.879 04:06:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:15:13.879 04:06:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:15:13.879 04:06:07 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:13.879 04:06:07 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:13.879 [2024-12-06 04:06:07.151639] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:13.879 04:06:07 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:13.879 04:06:07 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=65536 00:15:13.879 04:06:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:13.879 04:06:07 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:13.879 04:06:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:15:13.879 04:06:07 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:13.879 04:06:07 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:13.879 04:06:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@619 -- # data_offset=0 00:15:13.879 04:06:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:15:13.879 04:06:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:15:13.879 04:06:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:15:13.879 04:06:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:15:13.879 04:06:07 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:15:13.879 04:06:07 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:15:13.879 04:06:07 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:15:13.879 04:06:07 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:15:13.879 04:06:07 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:15:13.879 04:06:07 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:15:13.879 04:06:07 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:15:13.879 04:06:07 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:15:13.879 04:06:07 bdev_raid.raid_rebuild_test -- 
bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:15:14.139 [2024-12-06 04:06:07.442785] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:15:14.139 /dev/nbd0 00:15:14.139 04:06:07 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:15:14.139 04:06:07 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:15:14.139 04:06:07 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:15:14.139 04:06:07 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:15:14.139 04:06:07 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:15:14.139 04:06:07 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:15:14.139 04:06:07 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:15:14.139 04:06:07 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@877 -- # break 00:15:14.139 04:06:07 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:15:14.139 04:06:07 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:15:14.139 04:06:07 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:14.139 1+0 records in 00:15:14.139 1+0 records out 00:15:14.139 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000453721 s, 9.0 MB/s 00:15:14.139 04:06:07 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:14.139 04:06:07 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:15:14.139 04:06:07 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 
00:15:14.397 04:06:07 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:15:14.397 04:06:07 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:15:14.397 04:06:07 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:14.397 04:06:07 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:15:14.397 04:06:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@629 -- # '[' raid1 = raid5f ']' 00:15:14.397 04:06:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@633 -- # write_unit_size=1 00:15:14.397 04:06:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=512 count=65536 oflag=direct 00:15:21.005 65536+0 records in 00:15:21.005 65536+0 records out 00:15:21.005 33554432 bytes (34 MB, 32 MiB) copied, 6.80962 s, 4.9 MB/s 00:15:21.005 04:06:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:15:21.005 04:06:14 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:15:21.005 04:06:14 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:15:21.005 04:06:14 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:15:21.005 04:06:14 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:15:21.005 04:06:14 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:21.005 04:06:14 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:15:21.265 04:06:14 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:15:21.265 [2024-12-06 04:06:14.535667] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:21.265 04:06:14 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:15:21.265 
04:06:14 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:15:21.265 04:06:14 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:21.265 04:06:14 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:21.265 04:06:14 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:15:21.265 04:06:14 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:15:21.265 04:06:14 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:15:21.265 04:06:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:15:21.265 04:06:14 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:21.265 04:06:14 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:21.265 [2024-12-06 04:06:14.547766] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:15:21.265 04:06:14 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:21.265 04:06:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:15:21.265 04:06:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:21.265 04:06:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:21.265 04:06:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:21.265 04:06:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:21.265 04:06:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:21.265 04:06:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:21.265 04:06:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:21.265 04:06:14 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:21.265 04:06:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:21.265 04:06:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:21.265 04:06:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:21.265 04:06:14 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:21.265 04:06:14 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:21.265 04:06:14 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:21.265 04:06:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:21.265 "name": "raid_bdev1", 00:15:21.265 "uuid": "b9b81a55-9934-45c5-974d-a63a96924d0b", 00:15:21.265 "strip_size_kb": 0, 00:15:21.265 "state": "online", 00:15:21.265 "raid_level": "raid1", 00:15:21.265 "superblock": false, 00:15:21.265 "num_base_bdevs": 4, 00:15:21.265 "num_base_bdevs_discovered": 3, 00:15:21.265 "num_base_bdevs_operational": 3, 00:15:21.265 "base_bdevs_list": [ 00:15:21.265 { 00:15:21.265 "name": null, 00:15:21.265 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:21.265 "is_configured": false, 00:15:21.265 "data_offset": 0, 00:15:21.265 "data_size": 65536 00:15:21.265 }, 00:15:21.265 { 00:15:21.265 "name": "BaseBdev2", 00:15:21.265 "uuid": "3dae52aa-ac5e-5077-a267-5fc5ad43b9e2", 00:15:21.265 "is_configured": true, 00:15:21.265 "data_offset": 0, 00:15:21.265 "data_size": 65536 00:15:21.265 }, 00:15:21.265 { 00:15:21.265 "name": "BaseBdev3", 00:15:21.265 "uuid": "edfbfd25-6f65-5574-ab7a-1bf582a012f3", 00:15:21.265 "is_configured": true, 00:15:21.265 "data_offset": 0, 00:15:21.265 "data_size": 65536 00:15:21.265 }, 00:15:21.265 { 00:15:21.265 "name": "BaseBdev4", 00:15:21.265 "uuid": "d8fefda8-6a6f-58e0-b18b-61a53bbf1a06", 00:15:21.265 
"is_configured": true, 00:15:21.265 "data_offset": 0, 00:15:21.265 "data_size": 65536 00:15:21.265 } 00:15:21.265 ] 00:15:21.265 }' 00:15:21.265 04:06:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:21.265 04:06:14 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:21.836 04:06:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:15:21.836 04:06:15 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:21.836 04:06:15 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:21.836 [2024-12-06 04:06:15.027016] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:21.836 [2024-12-06 04:06:15.044903] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000d09d70 00:15:21.836 04:06:15 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:21.836 04:06:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@647 -- # sleep 1 00:15:21.836 [2024-12-06 04:06:15.047145] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:15:22.770 04:06:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:22.770 04:06:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:22.770 04:06:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:22.770 04:06:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:22.770 04:06:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:22.770 04:06:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:22.770 04:06:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == 
"raid_bdev1")' 00:15:22.770 04:06:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:22.770 04:06:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:22.770 04:06:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:22.770 04:06:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:22.770 "name": "raid_bdev1", 00:15:22.770 "uuid": "b9b81a55-9934-45c5-974d-a63a96924d0b", 00:15:22.770 "strip_size_kb": 0, 00:15:22.770 "state": "online", 00:15:22.770 "raid_level": "raid1", 00:15:22.770 "superblock": false, 00:15:22.770 "num_base_bdevs": 4, 00:15:22.770 "num_base_bdevs_discovered": 4, 00:15:22.770 "num_base_bdevs_operational": 4, 00:15:22.770 "process": { 00:15:22.770 "type": "rebuild", 00:15:22.770 "target": "spare", 00:15:22.770 "progress": { 00:15:22.770 "blocks": 20480, 00:15:22.770 "percent": 31 00:15:22.770 } 00:15:22.770 }, 00:15:22.770 "base_bdevs_list": [ 00:15:22.770 { 00:15:22.770 "name": "spare", 00:15:22.770 "uuid": "a26af65b-1d5f-5983-9bd4-a5063d6fb4bc", 00:15:22.770 "is_configured": true, 00:15:22.770 "data_offset": 0, 00:15:22.770 "data_size": 65536 00:15:22.770 }, 00:15:22.770 { 00:15:22.770 "name": "BaseBdev2", 00:15:22.770 "uuid": "3dae52aa-ac5e-5077-a267-5fc5ad43b9e2", 00:15:22.770 "is_configured": true, 00:15:22.770 "data_offset": 0, 00:15:22.770 "data_size": 65536 00:15:22.770 }, 00:15:22.770 { 00:15:22.770 "name": "BaseBdev3", 00:15:22.770 "uuid": "edfbfd25-6f65-5574-ab7a-1bf582a012f3", 00:15:22.770 "is_configured": true, 00:15:22.770 "data_offset": 0, 00:15:22.770 "data_size": 65536 00:15:22.770 }, 00:15:22.770 { 00:15:22.770 "name": "BaseBdev4", 00:15:22.770 "uuid": "d8fefda8-6a6f-58e0-b18b-61a53bbf1a06", 00:15:22.770 "is_configured": true, 00:15:22.770 "data_offset": 0, 00:15:22.770 "data_size": 65536 00:15:22.770 } 00:15:22.770 ] 00:15:22.770 }' 00:15:22.770 04:06:16 bdev_raid.raid_rebuild_test -- 
bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:23.029 04:06:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:23.029 04:06:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:23.029 04:06:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:23.029 04:06:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:15:23.029 04:06:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:23.029 04:06:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:23.029 [2024-12-06 04:06:16.210624] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:23.029 [2024-12-06 04:06:16.253689] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:15:23.029 [2024-12-06 04:06:16.253787] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:23.029 [2024-12-06 04:06:16.253810] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:23.029 [2024-12-06 04:06:16.253822] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:15:23.029 04:06:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:23.029 04:06:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:15:23.029 04:06:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:23.029 04:06:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:23.029 04:06:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:23.029 04:06:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 
00:15:23.029 04:06:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:23.029 04:06:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:23.029 04:06:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:23.029 04:06:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:23.029 04:06:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:23.029 04:06:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:23.029 04:06:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:23.029 04:06:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:23.029 04:06:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:23.029 04:06:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:23.029 04:06:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:23.029 "name": "raid_bdev1", 00:15:23.029 "uuid": "b9b81a55-9934-45c5-974d-a63a96924d0b", 00:15:23.029 "strip_size_kb": 0, 00:15:23.029 "state": "online", 00:15:23.029 "raid_level": "raid1", 00:15:23.029 "superblock": false, 00:15:23.029 "num_base_bdevs": 4, 00:15:23.029 "num_base_bdevs_discovered": 3, 00:15:23.029 "num_base_bdevs_operational": 3, 00:15:23.029 "base_bdevs_list": [ 00:15:23.029 { 00:15:23.029 "name": null, 00:15:23.029 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:23.029 "is_configured": false, 00:15:23.029 "data_offset": 0, 00:15:23.029 "data_size": 65536 00:15:23.029 }, 00:15:23.029 { 00:15:23.029 "name": "BaseBdev2", 00:15:23.029 "uuid": "3dae52aa-ac5e-5077-a267-5fc5ad43b9e2", 00:15:23.029 "is_configured": true, 00:15:23.029 "data_offset": 0, 00:15:23.029 "data_size": 65536 00:15:23.029 }, 00:15:23.029 { 
00:15:23.029 "name": "BaseBdev3", 00:15:23.029 "uuid": "edfbfd25-6f65-5574-ab7a-1bf582a012f3", 00:15:23.029 "is_configured": true, 00:15:23.029 "data_offset": 0, 00:15:23.029 "data_size": 65536 00:15:23.029 }, 00:15:23.029 { 00:15:23.029 "name": "BaseBdev4", 00:15:23.029 "uuid": "d8fefda8-6a6f-58e0-b18b-61a53bbf1a06", 00:15:23.029 "is_configured": true, 00:15:23.029 "data_offset": 0, 00:15:23.029 "data_size": 65536 00:15:23.029 } 00:15:23.029 ] 00:15:23.029 }' 00:15:23.029 04:06:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:23.029 04:06:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:23.594 04:06:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:23.594 04:06:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:23.594 04:06:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:23.594 04:06:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:23.594 04:06:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:23.594 04:06:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:23.595 04:06:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:23.595 04:06:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:23.595 04:06:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:23.595 04:06:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:23.595 04:06:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:23.595 "name": "raid_bdev1", 00:15:23.595 "uuid": "b9b81a55-9934-45c5-974d-a63a96924d0b", 00:15:23.595 "strip_size_kb": 0, 00:15:23.595 "state": "online", 
00:15:23.595 "raid_level": "raid1", 00:15:23.595 "superblock": false, 00:15:23.595 "num_base_bdevs": 4, 00:15:23.595 "num_base_bdevs_discovered": 3, 00:15:23.595 "num_base_bdevs_operational": 3, 00:15:23.595 "base_bdevs_list": [ 00:15:23.595 { 00:15:23.595 "name": null, 00:15:23.595 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:23.595 "is_configured": false, 00:15:23.595 "data_offset": 0, 00:15:23.595 "data_size": 65536 00:15:23.595 }, 00:15:23.595 { 00:15:23.595 "name": "BaseBdev2", 00:15:23.595 "uuid": "3dae52aa-ac5e-5077-a267-5fc5ad43b9e2", 00:15:23.595 "is_configured": true, 00:15:23.595 "data_offset": 0, 00:15:23.595 "data_size": 65536 00:15:23.595 }, 00:15:23.595 { 00:15:23.595 "name": "BaseBdev3", 00:15:23.595 "uuid": "edfbfd25-6f65-5574-ab7a-1bf582a012f3", 00:15:23.595 "is_configured": true, 00:15:23.595 "data_offset": 0, 00:15:23.595 "data_size": 65536 00:15:23.595 }, 00:15:23.595 { 00:15:23.595 "name": "BaseBdev4", 00:15:23.595 "uuid": "d8fefda8-6a6f-58e0-b18b-61a53bbf1a06", 00:15:23.595 "is_configured": true, 00:15:23.595 "data_offset": 0, 00:15:23.595 "data_size": 65536 00:15:23.595 } 00:15:23.595 ] 00:15:23.595 }' 00:15:23.595 04:06:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:23.595 04:06:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:23.595 04:06:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:23.595 04:06:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:23.595 04:06:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:15:23.595 04:06:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:23.595 04:06:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:23.595 [2024-12-06 04:06:16.887848] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:23.595 [2024-12-06 04:06:16.904971] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000d09e40 00:15:23.595 04:06:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:23.595 04:06:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@663 -- # sleep 1 00:15:23.595 [2024-12-06 04:06:16.907167] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:15:24.972 04:06:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:24.972 04:06:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:24.972 04:06:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:24.972 04:06:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:24.972 04:06:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:24.972 04:06:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:24.972 04:06:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:24.972 04:06:17 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:24.972 04:06:17 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:24.972 04:06:17 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:24.972 04:06:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:24.972 "name": "raid_bdev1", 00:15:24.972 "uuid": "b9b81a55-9934-45c5-974d-a63a96924d0b", 00:15:24.972 "strip_size_kb": 0, 00:15:24.972 "state": "online", 00:15:24.972 "raid_level": "raid1", 00:15:24.972 "superblock": false, 00:15:24.972 "num_base_bdevs": 4, 00:15:24.972 
"num_base_bdevs_discovered": 4, 00:15:24.972 "num_base_bdevs_operational": 4, 00:15:24.972 "process": { 00:15:24.972 "type": "rebuild", 00:15:24.972 "target": "spare", 00:15:24.972 "progress": { 00:15:24.972 "blocks": 20480, 00:15:24.972 "percent": 31 00:15:24.972 } 00:15:24.972 }, 00:15:24.972 "base_bdevs_list": [ 00:15:24.972 { 00:15:24.972 "name": "spare", 00:15:24.972 "uuid": "a26af65b-1d5f-5983-9bd4-a5063d6fb4bc", 00:15:24.972 "is_configured": true, 00:15:24.972 "data_offset": 0, 00:15:24.972 "data_size": 65536 00:15:24.972 }, 00:15:24.972 { 00:15:24.972 "name": "BaseBdev2", 00:15:24.972 "uuid": "3dae52aa-ac5e-5077-a267-5fc5ad43b9e2", 00:15:24.972 "is_configured": true, 00:15:24.972 "data_offset": 0, 00:15:24.972 "data_size": 65536 00:15:24.972 }, 00:15:24.972 { 00:15:24.972 "name": "BaseBdev3", 00:15:24.972 "uuid": "edfbfd25-6f65-5574-ab7a-1bf582a012f3", 00:15:24.972 "is_configured": true, 00:15:24.972 "data_offset": 0, 00:15:24.972 "data_size": 65536 00:15:24.972 }, 00:15:24.972 { 00:15:24.972 "name": "BaseBdev4", 00:15:24.972 "uuid": "d8fefda8-6a6f-58e0-b18b-61a53bbf1a06", 00:15:24.972 "is_configured": true, 00:15:24.972 "data_offset": 0, 00:15:24.972 "data_size": 65536 00:15:24.972 } 00:15:24.972 ] 00:15:24.972 }' 00:15:24.972 04:06:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:24.972 04:06:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:24.972 04:06:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:24.972 04:06:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:24.972 04:06:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@666 -- # '[' false = true ']' 00:15:24.972 04:06:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4 00:15:24.972 04:06:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@693 -- # '[' raid1 = 
raid1 ']' 00:15:24.972 04:06:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@693 -- # '[' 4 -gt 2 ']' 00:15:24.972 04:06:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@695 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:15:24.973 04:06:18 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:24.973 04:06:18 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:24.973 [2024-12-06 04:06:18.066230] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:15:24.973 [2024-12-06 04:06:18.113232] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d000d09e40 00:15:24.973 04:06:18 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:24.973 04:06:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@698 -- # base_bdevs[1]= 00:15:24.973 04:06:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@699 -- # (( num_base_bdevs_operational-- )) 00:15:24.973 04:06:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@702 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:24.973 04:06:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:24.973 04:06:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:24.973 04:06:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:24.973 04:06:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:24.973 04:06:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:24.973 04:06:18 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:24.973 04:06:18 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:24.973 04:06:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:24.973 04:06:18 
bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:24.973 04:06:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:24.973 "name": "raid_bdev1", 00:15:24.973 "uuid": "b9b81a55-9934-45c5-974d-a63a96924d0b", 00:15:24.973 "strip_size_kb": 0, 00:15:24.973 "state": "online", 00:15:24.973 "raid_level": "raid1", 00:15:24.973 "superblock": false, 00:15:24.973 "num_base_bdevs": 4, 00:15:24.973 "num_base_bdevs_discovered": 3, 00:15:24.973 "num_base_bdevs_operational": 3, 00:15:24.973 "process": { 00:15:24.973 "type": "rebuild", 00:15:24.973 "target": "spare", 00:15:24.973 "progress": { 00:15:24.973 "blocks": 24576, 00:15:24.973 "percent": 37 00:15:24.973 } 00:15:24.973 }, 00:15:24.973 "base_bdevs_list": [ 00:15:24.973 { 00:15:24.973 "name": "spare", 00:15:24.973 "uuid": "a26af65b-1d5f-5983-9bd4-a5063d6fb4bc", 00:15:24.973 "is_configured": true, 00:15:24.973 "data_offset": 0, 00:15:24.973 "data_size": 65536 00:15:24.973 }, 00:15:24.973 { 00:15:24.973 "name": null, 00:15:24.973 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:24.973 "is_configured": false, 00:15:24.973 "data_offset": 0, 00:15:24.973 "data_size": 65536 00:15:24.973 }, 00:15:24.973 { 00:15:24.973 "name": "BaseBdev3", 00:15:24.973 "uuid": "edfbfd25-6f65-5574-ab7a-1bf582a012f3", 00:15:24.973 "is_configured": true, 00:15:24.973 "data_offset": 0, 00:15:24.973 "data_size": 65536 00:15:24.973 }, 00:15:24.973 { 00:15:24.973 "name": "BaseBdev4", 00:15:24.973 "uuid": "d8fefda8-6a6f-58e0-b18b-61a53bbf1a06", 00:15:24.973 "is_configured": true, 00:15:24.973 "data_offset": 0, 00:15:24.973 "data_size": 65536 00:15:24.973 } 00:15:24.973 ] 00:15:24.973 }' 00:15:24.973 04:06:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:24.973 04:06:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:24.973 04:06:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq 
-r '.process.target // "none"' 00:15:24.973 04:06:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:24.973 04:06:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@706 -- # local timeout=456 00:15:24.973 04:06:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:24.973 04:06:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:24.973 04:06:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:24.973 04:06:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:24.973 04:06:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:24.973 04:06:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:24.973 04:06:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:24.973 04:06:18 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:24.973 04:06:18 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:24.973 04:06:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:24.973 04:06:18 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:24.973 04:06:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:24.973 "name": "raid_bdev1", 00:15:24.973 "uuid": "b9b81a55-9934-45c5-974d-a63a96924d0b", 00:15:24.973 "strip_size_kb": 0, 00:15:24.973 "state": "online", 00:15:24.973 "raid_level": "raid1", 00:15:24.973 "superblock": false, 00:15:24.973 "num_base_bdevs": 4, 00:15:24.973 "num_base_bdevs_discovered": 3, 00:15:24.973 "num_base_bdevs_operational": 3, 00:15:24.973 "process": { 00:15:24.973 "type": "rebuild", 00:15:24.973 "target": "spare", 00:15:24.973 "progress": { 
00:15:24.973 "blocks": 26624, 00:15:24.973 "percent": 40 00:15:24.973 } 00:15:24.973 }, 00:15:24.973 "base_bdevs_list": [ 00:15:24.973 { 00:15:24.973 "name": "spare", 00:15:24.973 "uuid": "a26af65b-1d5f-5983-9bd4-a5063d6fb4bc", 00:15:24.973 "is_configured": true, 00:15:24.973 "data_offset": 0, 00:15:24.973 "data_size": 65536 00:15:24.973 }, 00:15:24.973 { 00:15:24.973 "name": null, 00:15:24.973 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:24.973 "is_configured": false, 00:15:24.973 "data_offset": 0, 00:15:24.973 "data_size": 65536 00:15:24.973 }, 00:15:24.973 { 00:15:24.973 "name": "BaseBdev3", 00:15:24.973 "uuid": "edfbfd25-6f65-5574-ab7a-1bf582a012f3", 00:15:24.973 "is_configured": true, 00:15:24.973 "data_offset": 0, 00:15:24.973 "data_size": 65536 00:15:24.973 }, 00:15:24.973 { 00:15:24.973 "name": "BaseBdev4", 00:15:24.973 "uuid": "d8fefda8-6a6f-58e0-b18b-61a53bbf1a06", 00:15:24.973 "is_configured": true, 00:15:24.973 "data_offset": 0, 00:15:24.973 "data_size": 65536 00:15:24.973 } 00:15:24.973 ] 00:15:24.973 }' 00:15:24.973 04:06:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:25.231 04:06:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:25.231 04:06:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:25.231 04:06:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:25.231 04:06:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:15:26.166 04:06:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:26.166 04:06:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:26.166 04:06:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:26.166 04:06:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # 
local process_type=rebuild 00:15:26.166 04:06:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:26.166 04:06:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:26.166 04:06:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:26.166 04:06:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:26.166 04:06:19 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:26.166 04:06:19 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:26.166 04:06:19 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:26.166 04:06:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:26.166 "name": "raid_bdev1", 00:15:26.166 "uuid": "b9b81a55-9934-45c5-974d-a63a96924d0b", 00:15:26.166 "strip_size_kb": 0, 00:15:26.166 "state": "online", 00:15:26.166 "raid_level": "raid1", 00:15:26.166 "superblock": false, 00:15:26.166 "num_base_bdevs": 4, 00:15:26.166 "num_base_bdevs_discovered": 3, 00:15:26.166 "num_base_bdevs_operational": 3, 00:15:26.166 "process": { 00:15:26.166 "type": "rebuild", 00:15:26.166 "target": "spare", 00:15:26.166 "progress": { 00:15:26.166 "blocks": 49152, 00:15:26.166 "percent": 75 00:15:26.166 } 00:15:26.166 }, 00:15:26.166 "base_bdevs_list": [ 00:15:26.166 { 00:15:26.166 "name": "spare", 00:15:26.166 "uuid": "a26af65b-1d5f-5983-9bd4-a5063d6fb4bc", 00:15:26.166 "is_configured": true, 00:15:26.166 "data_offset": 0, 00:15:26.166 "data_size": 65536 00:15:26.166 }, 00:15:26.166 { 00:15:26.166 "name": null, 00:15:26.166 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:26.167 "is_configured": false, 00:15:26.167 "data_offset": 0, 00:15:26.167 "data_size": 65536 00:15:26.167 }, 00:15:26.167 { 00:15:26.167 "name": "BaseBdev3", 00:15:26.167 "uuid": 
"edfbfd25-6f65-5574-ab7a-1bf582a012f3", 00:15:26.167 "is_configured": true, 00:15:26.167 "data_offset": 0, 00:15:26.167 "data_size": 65536 00:15:26.167 }, 00:15:26.167 { 00:15:26.167 "name": "BaseBdev4", 00:15:26.167 "uuid": "d8fefda8-6a6f-58e0-b18b-61a53bbf1a06", 00:15:26.167 "is_configured": true, 00:15:26.167 "data_offset": 0, 00:15:26.167 "data_size": 65536 00:15:26.167 } 00:15:26.167 ] 00:15:26.167 }' 00:15:26.167 04:06:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:26.167 04:06:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:26.167 04:06:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:26.426 04:06:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:26.426 04:06:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:15:26.995 [2024-12-06 04:06:20.123270] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:15:26.995 [2024-12-06 04:06:20.123466] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:15:26.995 [2024-12-06 04:06:20.123533] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:27.256 04:06:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:27.256 04:06:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:27.256 04:06:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:27.256 04:06:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:27.256 04:06:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:27.256 04:06:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:27.256 04:06:20 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:27.256 04:06:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:27.256 04:06:20 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:27.256 04:06:20 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:27.256 04:06:20 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:27.256 04:06:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:27.256 "name": "raid_bdev1", 00:15:27.256 "uuid": "b9b81a55-9934-45c5-974d-a63a96924d0b", 00:15:27.256 "strip_size_kb": 0, 00:15:27.256 "state": "online", 00:15:27.256 "raid_level": "raid1", 00:15:27.256 "superblock": false, 00:15:27.256 "num_base_bdevs": 4, 00:15:27.256 "num_base_bdevs_discovered": 3, 00:15:27.256 "num_base_bdevs_operational": 3, 00:15:27.256 "base_bdevs_list": [ 00:15:27.256 { 00:15:27.256 "name": "spare", 00:15:27.256 "uuid": "a26af65b-1d5f-5983-9bd4-a5063d6fb4bc", 00:15:27.256 "is_configured": true, 00:15:27.256 "data_offset": 0, 00:15:27.256 "data_size": 65536 00:15:27.256 }, 00:15:27.256 { 00:15:27.256 "name": null, 00:15:27.256 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:27.256 "is_configured": false, 00:15:27.256 "data_offset": 0, 00:15:27.256 "data_size": 65536 00:15:27.256 }, 00:15:27.256 { 00:15:27.256 "name": "BaseBdev3", 00:15:27.256 "uuid": "edfbfd25-6f65-5574-ab7a-1bf582a012f3", 00:15:27.256 "is_configured": true, 00:15:27.256 "data_offset": 0, 00:15:27.256 "data_size": 65536 00:15:27.256 }, 00:15:27.256 { 00:15:27.256 "name": "BaseBdev4", 00:15:27.256 "uuid": "d8fefda8-6a6f-58e0-b18b-61a53bbf1a06", 00:15:27.256 "is_configured": true, 00:15:27.256 "data_offset": 0, 00:15:27.256 "data_size": 65536 00:15:27.256 } 00:15:27.256 ] 00:15:27.256 }' 00:15:27.256 04:06:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # 
jq -r '.process.type // "none"' 00:15:27.515 04:06:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:15:27.515 04:06:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:27.515 04:06:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:15:27.515 04:06:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@709 -- # break 00:15:27.515 04:06:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:27.515 04:06:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:27.515 04:06:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:27.515 04:06:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:27.516 04:06:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:27.516 04:06:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:27.516 04:06:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:27.516 04:06:20 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:27.516 04:06:20 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:27.516 04:06:20 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:27.516 04:06:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:27.516 "name": "raid_bdev1", 00:15:27.516 "uuid": "b9b81a55-9934-45c5-974d-a63a96924d0b", 00:15:27.516 "strip_size_kb": 0, 00:15:27.516 "state": "online", 00:15:27.516 "raid_level": "raid1", 00:15:27.516 "superblock": false, 00:15:27.516 "num_base_bdevs": 4, 00:15:27.516 "num_base_bdevs_discovered": 3, 00:15:27.516 "num_base_bdevs_operational": 3, 00:15:27.516 
"base_bdevs_list": [ 00:15:27.516 { 00:15:27.516 "name": "spare", 00:15:27.516 "uuid": "a26af65b-1d5f-5983-9bd4-a5063d6fb4bc", 00:15:27.516 "is_configured": true, 00:15:27.516 "data_offset": 0, 00:15:27.516 "data_size": 65536 00:15:27.516 }, 00:15:27.516 { 00:15:27.516 "name": null, 00:15:27.516 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:27.516 "is_configured": false, 00:15:27.516 "data_offset": 0, 00:15:27.516 "data_size": 65536 00:15:27.516 }, 00:15:27.516 { 00:15:27.516 "name": "BaseBdev3", 00:15:27.516 "uuid": "edfbfd25-6f65-5574-ab7a-1bf582a012f3", 00:15:27.516 "is_configured": true, 00:15:27.516 "data_offset": 0, 00:15:27.516 "data_size": 65536 00:15:27.516 }, 00:15:27.516 { 00:15:27.516 "name": "BaseBdev4", 00:15:27.516 "uuid": "d8fefda8-6a6f-58e0-b18b-61a53bbf1a06", 00:15:27.516 "is_configured": true, 00:15:27.516 "data_offset": 0, 00:15:27.516 "data_size": 65536 00:15:27.516 } 00:15:27.516 ] 00:15:27.516 }' 00:15:27.516 04:06:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:27.516 04:06:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:27.516 04:06:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:27.516 04:06:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:27.516 04:06:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:15:27.516 04:06:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:27.516 04:06:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:27.516 04:06:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:27.516 04:06:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:27.516 04:06:20 bdev_raid.raid_rebuild_test -- 
bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:27.516 04:06:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:27.516 04:06:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:27.516 04:06:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:27.516 04:06:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:27.516 04:06:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:27.516 04:06:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:27.516 04:06:20 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:27.516 04:06:20 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:27.516 04:06:20 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:27.516 04:06:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:27.516 "name": "raid_bdev1", 00:15:27.516 "uuid": "b9b81a55-9934-45c5-974d-a63a96924d0b", 00:15:27.516 "strip_size_kb": 0, 00:15:27.516 "state": "online", 00:15:27.516 "raid_level": "raid1", 00:15:27.516 "superblock": false, 00:15:27.516 "num_base_bdevs": 4, 00:15:27.516 "num_base_bdevs_discovered": 3, 00:15:27.516 "num_base_bdevs_operational": 3, 00:15:27.516 "base_bdevs_list": [ 00:15:27.516 { 00:15:27.516 "name": "spare", 00:15:27.516 "uuid": "a26af65b-1d5f-5983-9bd4-a5063d6fb4bc", 00:15:27.516 "is_configured": true, 00:15:27.516 "data_offset": 0, 00:15:27.516 "data_size": 65536 00:15:27.516 }, 00:15:27.516 { 00:15:27.516 "name": null, 00:15:27.516 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:27.516 "is_configured": false, 00:15:27.516 "data_offset": 0, 00:15:27.516 "data_size": 65536 00:15:27.516 }, 00:15:27.516 { 00:15:27.516 "name": "BaseBdev3", 00:15:27.516 "uuid": 
"edfbfd25-6f65-5574-ab7a-1bf582a012f3", 00:15:27.516 "is_configured": true, 00:15:27.516 "data_offset": 0, 00:15:27.516 "data_size": 65536 00:15:27.516 }, 00:15:27.516 { 00:15:27.516 "name": "BaseBdev4", 00:15:27.516 "uuid": "d8fefda8-6a6f-58e0-b18b-61a53bbf1a06", 00:15:27.516 "is_configured": true, 00:15:27.516 "data_offset": 0, 00:15:27.516 "data_size": 65536 00:15:27.516 } 00:15:27.516 ] 00:15:27.516 }' 00:15:27.516 04:06:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:27.516 04:06:20 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:28.082 04:06:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:15:28.082 04:06:21 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:28.082 04:06:21 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:28.082 [2024-12-06 04:06:21.224155] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:28.082 [2024-12-06 04:06:21.224273] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:28.082 [2024-12-06 04:06:21.224409] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:28.082 [2024-12-06 04:06:21.224553] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:28.082 [2024-12-06 04:06:21.224609] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:15:28.082 04:06:21 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:28.082 04:06:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:28.082 04:06:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # jq length 00:15:28.082 04:06:21 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 
00:15:28.082 04:06:21 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:28.082 04:06:21 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:28.082 04:06:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:15:28.082 04:06:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:15:28.082 04:06:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:15:28.082 04:06:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:15:28.082 04:06:21 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:15:28.082 04:06:21 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:15:28.082 04:06:21 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:15:28.082 04:06:21 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:15:28.082 04:06:21 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:15:28.082 04:06:21 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:15:28.082 04:06:21 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:15:28.082 04:06:21 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:15:28.082 04:06:21 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:15:28.340 /dev/nbd0 00:15:28.340 04:06:21 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:15:28.340 04:06:21 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:15:28.340 04:06:21 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:15:28.340 04:06:21 
bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:15:28.340 04:06:21 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:15:28.340 04:06:21 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:15:28.340 04:06:21 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:15:28.340 04:06:21 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@877 -- # break 00:15:28.340 04:06:21 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:15:28.340 04:06:21 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:15:28.340 04:06:21 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:28.340 1+0 records in 00:15:28.340 1+0 records out 00:15:28.340 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000267689 s, 15.3 MB/s 00:15:28.340 04:06:21 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:28.340 04:06:21 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:15:28.340 04:06:21 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:28.340 04:06:21 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:15:28.340 04:06:21 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:15:28.340 04:06:21 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:28.340 04:06:21 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:15:28.340 04:06:21 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:15:28.598 /dev/nbd1 00:15:28.598 
04:06:21 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:15:28.598 04:06:21 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:15:28.598 04:06:21 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:15:28.598 04:06:21 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:15:28.598 04:06:21 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:15:28.598 04:06:21 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:15:28.598 04:06:21 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:15:28.598 04:06:21 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@877 -- # break 00:15:28.598 04:06:21 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:15:28.598 04:06:21 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:15:28.598 04:06:21 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:28.598 1+0 records in 00:15:28.598 1+0 records out 00:15:28.598 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000469595 s, 8.7 MB/s 00:15:28.598 04:06:21 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:28.598 04:06:21 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:15:28.598 04:06:21 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:28.598 04:06:21 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:15:28.598 04:06:21 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:15:28.598 04:06:21 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 
00:15:28.598 04:06:21 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:15:28.598 04:06:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@738 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:15:28.856 04:06:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:15:28.856 04:06:22 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:15:28.856 04:06:22 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:15:28.856 04:06:22 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:15:28.856 04:06:22 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:15:28.856 04:06:22 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:28.856 04:06:22 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:15:29.114 04:06:22 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:15:29.114 04:06:22 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:15:29.114 04:06:22 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:15:29.114 04:06:22 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:29.114 04:06:22 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:29.114 04:06:22 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:15:29.114 04:06:22 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:15:29.114 04:06:22 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:15:29.114 04:06:22 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:29.115 04:06:22 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:15:29.373 04:06:22 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:15:29.373 04:06:22 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:15:29.373 04:06:22 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:15:29.373 04:06:22 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:29.373 04:06:22 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:29.373 04:06:22 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:15:29.373 04:06:22 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:15:29.373 04:06:22 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:15:29.373 04:06:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@743 -- # '[' false = true ']' 00:15:29.373 04:06:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@784 -- # killprocess 77758 00:15:29.373 04:06:22 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@954 -- # '[' -z 77758 ']' 00:15:29.373 04:06:22 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@958 -- # kill -0 77758 00:15:29.373 04:06:22 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@959 -- # uname 00:15:29.373 04:06:22 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:29.373 04:06:22 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 77758 00:15:29.373 04:06:22 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:29.373 04:06:22 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:29.373 killing process with pid 77758 00:15:29.373 04:06:22 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 77758' 00:15:29.373 
04:06:22 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@973 -- # kill 77758 00:15:29.373 Received shutdown signal, test time was about 60.000000 seconds 00:15:29.373 00:15:29.373 Latency(us) 00:15:29.373 [2024-12-06T04:06:22.727Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:29.373 [2024-12-06T04:06:22.727Z] =================================================================================================================== 00:15:29.373 [2024-12-06T04:06:22.727Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:15:29.373 04:06:22 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@978 -- # wait 77758 00:15:29.373 [2024-12-06 04:06:22.560401] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:29.939 [2024-12-06 04:06:23.090581] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:31.315 04:06:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@786 -- # return 0 00:15:31.315 00:15:31.315 real 0m19.015s 00:15:31.315 user 0m21.218s 00:15:31.315 sys 0m3.358s 00:15:31.315 04:06:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:31.315 04:06:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:31.315 ************************************ 00:15:31.315 END TEST raid_rebuild_test 00:15:31.315 ************************************ 00:15:31.315 04:06:24 bdev_raid -- bdev/bdev_raid.sh@979 -- # run_test raid_rebuild_test_sb raid_rebuild_test raid1 4 true false true 00:15:31.315 04:06:24 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:15:31.315 04:06:24 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:31.315 04:06:24 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:15:31.315 ************************************ 00:15:31.315 START TEST raid_rebuild_test_sb 00:15:31.315 ************************************ 00:15:31.315 04:06:24 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 4 true false true 00:15:31.315 04:06:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:15:31.315 04:06:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4 00:15:31.315 04:06:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:15:31.315 04:06:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:15:31.315 04:06:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # local verify=true 00:15:31.315 04:06:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:15:31.315 04:06:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:31.315 04:06:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:15:31.315 04:06:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:15:31.315 04:06:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:31.315 04:06:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:15:31.315 04:06:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:15:31.315 04:06:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:31.315 04:06:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:15:31.315 04:06:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:15:31.315 04:06:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:31.315 04:06:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev4 00:15:31.315 04:06:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:15:31.315 04:06:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= 
num_base_bdevs )) 00:15:31.315 04:06:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:15:31.315 04:06:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:15:31.315 04:06:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:15:31.315 04:06:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # local strip_size 00:15:31.315 04:06:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@577 -- # local create_arg 00:15:31.315 04:06:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:15:31.315 04:06:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@579 -- # local data_offset 00:15:31.315 04:06:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:15:31.315 04:06:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:15:31.315 04:06:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:15:31.315 04:06:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:15:31.315 04:06:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@597 -- # raid_pid=78217 00:15:31.315 04:06:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@598 -- # waitforlisten 78217 00:15:31.315 04:06:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@835 -- # '[' -z 78217 ']' 00:15:31.315 04:06:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:31.315 04:06:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:31.315 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:15:31.315 04:06:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:31.315 04:06:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:31.315 04:06:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:15:31.315 04:06:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:31.315 [2024-12-06 04:06:24.503697] Starting SPDK v25.01-pre git sha1 a4d2a837b / DPDK 24.03.0 initialization... 00:15:31.315 [2024-12-06 04:06:24.503828] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78217 ] 00:15:31.315 I/O size of 3145728 is greater than zero copy threshold (65536). 00:15:31.315 Zero copy mechanism will not be used. 
00:15:31.574 [2024-12-06 04:06:24.684185] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:31.574 [2024-12-06 04:06:24.817075] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:31.833 [2024-12-06 04:06:25.038815] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:31.833 [2024-12-06 04:06:25.038897] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:32.091 04:06:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:32.091 04:06:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@868 -- # return 0 00:15:32.091 04:06:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:15:32.091 04:06:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:15:32.091 04:06:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:32.091 04:06:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:32.091 BaseBdev1_malloc 00:15:32.091 04:06:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:32.091 04:06:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:15:32.091 04:06:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:32.091 04:06:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:32.091 [2024-12-06 04:06:25.422844] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:15:32.091 [2024-12-06 04:06:25.422920] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:32.091 [2024-12-06 04:06:25.422944] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:15:32.091 [2024-12-06 
04:06:25.422956] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:32.091 [2024-12-06 04:06:25.425398] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:32.091 [2024-12-06 04:06:25.425445] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:15:32.091 BaseBdev1 00:15:32.091 04:06:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:32.091 04:06:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:15:32.091 04:06:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:15:32.091 04:06:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:32.091 04:06:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:32.350 BaseBdev2_malloc 00:15:32.350 04:06:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:32.350 04:06:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:15:32.350 04:06:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:32.350 04:06:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:32.350 [2024-12-06 04:06:25.481390] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:15:32.350 [2024-12-06 04:06:25.481473] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:32.350 [2024-12-06 04:06:25.481501] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:15:32.350 [2024-12-06 04:06:25.481515] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:32.350 [2024-12-06 04:06:25.483887] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev 
registered 00:15:32.350 [2024-12-06 04:06:25.483938] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:15:32.350 BaseBdev2 00:15:32.350 04:06:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:32.350 04:06:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:15:32.350 04:06:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:15:32.350 04:06:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:32.350 04:06:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:32.350 BaseBdev3_malloc 00:15:32.350 04:06:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:32.350 04:06:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:15:32.350 04:06:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:32.350 04:06:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:32.350 [2024-12-06 04:06:25.555785] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:15:32.350 [2024-12-06 04:06:25.555856] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:32.350 [2024-12-06 04:06:25.555883] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:15:32.350 [2024-12-06 04:06:25.555895] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:32.350 [2024-12-06 04:06:25.558367] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:32.350 [2024-12-06 04:06:25.558409] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:15:32.350 BaseBdev3 00:15:32.350 04:06:25 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:32.350 04:06:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:15:32.350 04:06:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:15:32.350 04:06:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:32.350 04:06:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:32.350 BaseBdev4_malloc 00:15:32.350 04:06:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:32.350 04:06:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:15:32.350 04:06:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:32.350 04:06:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:32.350 [2024-12-06 04:06:25.614243] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:15:32.350 [2024-12-06 04:06:25.614326] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:32.350 [2024-12-06 04:06:25.614351] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:15:32.350 [2024-12-06 04:06:25.614363] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:32.350 [2024-12-06 04:06:25.616715] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:32.350 [2024-12-06 04:06:25.616763] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:15:32.350 BaseBdev4 00:15:32.350 04:06:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:32.350 04:06:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 
512 -b spare_malloc 00:15:32.350 04:06:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:32.350 04:06:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:32.350 spare_malloc 00:15:32.350 04:06:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:32.350 04:06:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:15:32.350 04:06:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:32.350 04:06:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:32.350 spare_delay 00:15:32.350 04:06:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:32.350 04:06:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:15:32.350 04:06:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:32.350 04:06:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:32.350 [2024-12-06 04:06:25.681479] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:15:32.350 [2024-12-06 04:06:25.681554] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:32.350 [2024-12-06 04:06:25.681578] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:15:32.350 [2024-12-06 04:06:25.681590] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:32.350 [2024-12-06 04:06:25.684040] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:32.350 [2024-12-06 04:06:25.684092] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:15:32.350 spare 00:15:32.350 04:06:25 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:32.350 04:06:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 00:15:32.350 04:06:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:32.350 04:06:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:32.350 [2024-12-06 04:06:25.693507] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:32.350 [2024-12-06 04:06:25.695519] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:32.350 [2024-12-06 04:06:25.695592] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:32.350 [2024-12-06 04:06:25.695646] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:15:32.350 [2024-12-06 04:06:25.695882] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:15:32.350 [2024-12-06 04:06:25.695908] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:15:32.350 [2024-12-06 04:06:25.696235] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:15:32.350 [2024-12-06 04:06:25.696465] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:15:32.350 [2024-12-06 04:06:25.696484] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:15:32.350 [2024-12-06 04:06:25.696693] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:32.350 04:06:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:32.350 04:06:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:15:32.350 04:06:25 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:32.350 04:06:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:32.350 04:06:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:32.350 04:06:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:32.350 04:06:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:32.350 04:06:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:32.350 04:06:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:32.350 04:06:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:32.350 04:06:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:32.608 04:06:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:32.608 04:06:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:32.608 04:06:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:32.608 04:06:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:32.608 04:06:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:32.608 04:06:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:32.608 "name": "raid_bdev1", 00:15:32.608 "uuid": "8348bb07-ebfb-4c4d-a0ed-0eb115c7054b", 00:15:32.608 "strip_size_kb": 0, 00:15:32.608 "state": "online", 00:15:32.608 "raid_level": "raid1", 00:15:32.608 "superblock": true, 00:15:32.608 "num_base_bdevs": 4, 00:15:32.608 "num_base_bdevs_discovered": 4, 00:15:32.608 "num_base_bdevs_operational": 4, 00:15:32.608 "base_bdevs_list": [ 00:15:32.608 { 
00:15:32.608 "name": "BaseBdev1", 00:15:32.608 "uuid": "ceef28f6-97c9-507f-a7b9-71799923ccff", 00:15:32.608 "is_configured": true, 00:15:32.608 "data_offset": 2048, 00:15:32.608 "data_size": 63488 00:15:32.608 }, 00:15:32.608 { 00:15:32.608 "name": "BaseBdev2", 00:15:32.608 "uuid": "b4c9edb7-3507-509a-aa62-6bbcc76bdd97", 00:15:32.608 "is_configured": true, 00:15:32.608 "data_offset": 2048, 00:15:32.608 "data_size": 63488 00:15:32.608 }, 00:15:32.608 { 00:15:32.608 "name": "BaseBdev3", 00:15:32.608 "uuid": "1fb3113d-f193-5393-bea3-abc7ddb2e821", 00:15:32.608 "is_configured": true, 00:15:32.608 "data_offset": 2048, 00:15:32.608 "data_size": 63488 00:15:32.608 }, 00:15:32.608 { 00:15:32.608 "name": "BaseBdev4", 00:15:32.608 "uuid": "a3ec3602-77b9-50fe-9045-ab5d2c6a9dba", 00:15:32.608 "is_configured": true, 00:15:32.608 "data_offset": 2048, 00:15:32.608 "data_size": 63488 00:15:32.608 } 00:15:32.608 ] 00:15:32.608 }' 00:15:32.608 04:06:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:32.608 04:06:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:32.866 04:06:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:15:32.866 04:06:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:15:32.866 04:06:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:32.866 04:06:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:32.866 [2024-12-06 04:06:26.153160] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:32.866 04:06:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:32.866 04:06:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=63488 00:15:32.866 04:06:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 
00:15:32.866 04:06:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:15:32.866 04:06:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:32.866 04:06:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:32.866 04:06:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:33.125 04:06:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # data_offset=2048 00:15:33.125 04:06:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:15:33.125 04:06:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:15:33.125 04:06:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:15:33.125 04:06:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:15:33.125 04:06:26 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:15:33.125 04:06:26 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:15:33.125 04:06:26 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:15:33.125 04:06:26 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:15:33.125 04:06:26 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:15:33.125 04:06:26 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:15:33.125 04:06:26 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:15:33.125 04:06:26 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:15:33.125 04:06:26 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:15:33.125 
[2024-12-06 04:06:26.452345] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:15:33.125 /dev/nbd0 00:15:33.384 04:06:26 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:15:33.384 04:06:26 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:15:33.384 04:06:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:15:33.384 04:06:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:15:33.384 04:06:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:15:33.384 04:06:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:15:33.384 04:06:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:15:33.384 04:06:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:15:33.384 04:06:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:15:33.384 04:06:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:15:33.384 04:06:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:33.384 1+0 records in 00:15:33.384 1+0 records out 00:15:33.384 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000563107 s, 7.3 MB/s 00:15:33.384 04:06:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:33.384 04:06:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:15:33.384 04:06:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:33.384 04:06:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 
0 ']' 00:15:33.384 04:06:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:15:33.384 04:06:26 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:33.384 04:06:26 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:15:33.384 04:06:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@629 -- # '[' raid1 = raid5f ']' 00:15:33.384 04:06:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@633 -- # write_unit_size=1 00:15:33.384 04:06:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=512 count=63488 oflag=direct 00:15:39.951 63488+0 records in 00:15:39.951 63488+0 records out 00:15:39.951 32505856 bytes (33 MB, 31 MiB) copied, 6.38529 s, 5.1 MB/s 00:15:39.951 04:06:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:15:39.951 04:06:32 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:15:39.951 04:06:32 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:15:39.951 04:06:32 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:15:39.951 04:06:32 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:15:39.951 04:06:32 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:39.951 04:06:32 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:15:39.951 [2024-12-06 04:06:33.148813] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:39.951 04:06:33 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:15:39.951 04:06:33 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:15:39.951 04:06:33 bdev_raid.raid_rebuild_test_sb -- 
bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:15:39.951 04:06:33 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:39.951 04:06:33 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:39.951 04:06:33 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:15:39.951 04:06:33 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:15:39.951 04:06:33 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:15:39.951 04:06:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:15:39.951 04:06:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:39.951 04:06:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:39.951 [2024-12-06 04:06:33.197211] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:15:39.951 04:06:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:39.951 04:06:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:15:39.951 04:06:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:39.951 04:06:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:39.951 04:06:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:39.951 04:06:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:39.951 04:06:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:39.951 04:06:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:39.951 04:06:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:39.951 04:06:33 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:39.951 04:06:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:39.951 04:06:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:39.951 04:06:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:39.951 04:06:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:39.951 04:06:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:39.951 04:06:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:39.951 04:06:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:39.951 "name": "raid_bdev1", 00:15:39.951 "uuid": "8348bb07-ebfb-4c4d-a0ed-0eb115c7054b", 00:15:39.951 "strip_size_kb": 0, 00:15:39.951 "state": "online", 00:15:39.951 "raid_level": "raid1", 00:15:39.951 "superblock": true, 00:15:39.951 "num_base_bdevs": 4, 00:15:39.951 "num_base_bdevs_discovered": 3, 00:15:39.951 "num_base_bdevs_operational": 3, 00:15:39.951 "base_bdevs_list": [ 00:15:39.951 { 00:15:39.951 "name": null, 00:15:39.951 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:39.951 "is_configured": false, 00:15:39.951 "data_offset": 0, 00:15:39.951 "data_size": 63488 00:15:39.952 }, 00:15:39.952 { 00:15:39.952 "name": "BaseBdev2", 00:15:39.952 "uuid": "b4c9edb7-3507-509a-aa62-6bbcc76bdd97", 00:15:39.952 "is_configured": true, 00:15:39.952 "data_offset": 2048, 00:15:39.952 "data_size": 63488 00:15:39.952 }, 00:15:39.952 { 00:15:39.952 "name": "BaseBdev3", 00:15:39.952 "uuid": "1fb3113d-f193-5393-bea3-abc7ddb2e821", 00:15:39.952 "is_configured": true, 00:15:39.952 "data_offset": 2048, 00:15:39.952 "data_size": 63488 00:15:39.952 }, 00:15:39.952 { 00:15:39.952 "name": "BaseBdev4", 00:15:39.952 "uuid": 
"a3ec3602-77b9-50fe-9045-ab5d2c6a9dba", 00:15:39.952 "is_configured": true, 00:15:39.952 "data_offset": 2048, 00:15:39.952 "data_size": 63488 00:15:39.952 } 00:15:39.952 ] 00:15:39.952 }' 00:15:39.952 04:06:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:39.952 04:06:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:40.517 04:06:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:15:40.517 04:06:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:40.517 04:06:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:40.517 [2024-12-06 04:06:33.624643] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:40.517 [2024-12-06 04:06:33.643487] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000ca3500 00:15:40.517 04:06:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:40.517 04:06:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@647 -- # sleep 1 00:15:40.517 [2024-12-06 04:06:33.645738] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:15:41.452 04:06:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:41.452 04:06:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:41.452 04:06:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:41.452 04:06:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:41.452 04:06:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:41.452 04:06:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:41.452 04:06:34 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:41.452 04:06:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:41.452 04:06:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:41.452 04:06:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:41.452 04:06:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:41.452 "name": "raid_bdev1", 00:15:41.452 "uuid": "8348bb07-ebfb-4c4d-a0ed-0eb115c7054b", 00:15:41.452 "strip_size_kb": 0, 00:15:41.452 "state": "online", 00:15:41.452 "raid_level": "raid1", 00:15:41.452 "superblock": true, 00:15:41.452 "num_base_bdevs": 4, 00:15:41.452 "num_base_bdevs_discovered": 4, 00:15:41.452 "num_base_bdevs_operational": 4, 00:15:41.452 "process": { 00:15:41.452 "type": "rebuild", 00:15:41.452 "target": "spare", 00:15:41.452 "progress": { 00:15:41.452 "blocks": 20480, 00:15:41.452 "percent": 32 00:15:41.452 } 00:15:41.452 }, 00:15:41.452 "base_bdevs_list": [ 00:15:41.452 { 00:15:41.452 "name": "spare", 00:15:41.452 "uuid": "00c24a51-0d33-5d60-b07a-ab9da5daf1e7", 00:15:41.452 "is_configured": true, 00:15:41.452 "data_offset": 2048, 00:15:41.453 "data_size": 63488 00:15:41.453 }, 00:15:41.453 { 00:15:41.453 "name": "BaseBdev2", 00:15:41.453 "uuid": "b4c9edb7-3507-509a-aa62-6bbcc76bdd97", 00:15:41.453 "is_configured": true, 00:15:41.453 "data_offset": 2048, 00:15:41.453 "data_size": 63488 00:15:41.453 }, 00:15:41.453 { 00:15:41.453 "name": "BaseBdev3", 00:15:41.453 "uuid": "1fb3113d-f193-5393-bea3-abc7ddb2e821", 00:15:41.453 "is_configured": true, 00:15:41.453 "data_offset": 2048, 00:15:41.453 "data_size": 63488 00:15:41.453 }, 00:15:41.453 { 00:15:41.453 "name": "BaseBdev4", 00:15:41.453 "uuid": "a3ec3602-77b9-50fe-9045-ab5d2c6a9dba", 00:15:41.453 "is_configured": true, 00:15:41.453 "data_offset": 2048, 00:15:41.453 "data_size": 63488 
00:15:41.453 } 00:15:41.453 ] 00:15:41.453 }' 00:15:41.453 04:06:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:41.453 04:06:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:41.453 04:06:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:41.453 04:06:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:41.453 04:06:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:15:41.453 04:06:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:41.453 04:06:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:41.453 [2024-12-06 04:06:34.785008] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:41.711 [2024-12-06 04:06:34.852130] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:15:41.711 [2024-12-06 04:06:34.852230] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:41.711 [2024-12-06 04:06:34.852249] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:41.711 [2024-12-06 04:06:34.852259] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:15:41.711 04:06:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:41.711 04:06:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:15:41.711 04:06:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:41.711 04:06:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:41.712 04:06:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 
-- # local raid_level=raid1 00:15:41.712 04:06:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:41.712 04:06:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:41.712 04:06:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:41.712 04:06:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:41.712 04:06:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:41.712 04:06:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:41.712 04:06:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:41.712 04:06:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:41.712 04:06:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:41.712 04:06:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:41.712 04:06:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:41.712 04:06:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:41.712 "name": "raid_bdev1", 00:15:41.712 "uuid": "8348bb07-ebfb-4c4d-a0ed-0eb115c7054b", 00:15:41.712 "strip_size_kb": 0, 00:15:41.712 "state": "online", 00:15:41.712 "raid_level": "raid1", 00:15:41.712 "superblock": true, 00:15:41.712 "num_base_bdevs": 4, 00:15:41.712 "num_base_bdevs_discovered": 3, 00:15:41.712 "num_base_bdevs_operational": 3, 00:15:41.712 "base_bdevs_list": [ 00:15:41.712 { 00:15:41.712 "name": null, 00:15:41.712 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:41.712 "is_configured": false, 00:15:41.712 "data_offset": 0, 00:15:41.712 "data_size": 63488 00:15:41.712 }, 00:15:41.712 { 00:15:41.712 "name": "BaseBdev2", 00:15:41.712 "uuid": 
"b4c9edb7-3507-509a-aa62-6bbcc76bdd97", 00:15:41.712 "is_configured": true, 00:15:41.712 "data_offset": 2048, 00:15:41.712 "data_size": 63488 00:15:41.712 }, 00:15:41.712 { 00:15:41.712 "name": "BaseBdev3", 00:15:41.712 "uuid": "1fb3113d-f193-5393-bea3-abc7ddb2e821", 00:15:41.712 "is_configured": true, 00:15:41.712 "data_offset": 2048, 00:15:41.712 "data_size": 63488 00:15:41.712 }, 00:15:41.712 { 00:15:41.712 "name": "BaseBdev4", 00:15:41.712 "uuid": "a3ec3602-77b9-50fe-9045-ab5d2c6a9dba", 00:15:41.712 "is_configured": true, 00:15:41.712 "data_offset": 2048, 00:15:41.712 "data_size": 63488 00:15:41.712 } 00:15:41.712 ] 00:15:41.712 }' 00:15:41.712 04:06:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:41.712 04:06:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:41.971 04:06:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:41.971 04:06:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:41.971 04:06:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:41.971 04:06:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:41.971 04:06:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:41.971 04:06:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:41.971 04:06:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:41.971 04:06:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:41.971 04:06:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:41.971 04:06:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:41.971 04:06:35 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:41.971 "name": "raid_bdev1", 00:15:41.971 "uuid": "8348bb07-ebfb-4c4d-a0ed-0eb115c7054b", 00:15:41.971 "strip_size_kb": 0, 00:15:41.971 "state": "online", 00:15:41.971 "raid_level": "raid1", 00:15:41.971 "superblock": true, 00:15:41.971 "num_base_bdevs": 4, 00:15:41.971 "num_base_bdevs_discovered": 3, 00:15:41.971 "num_base_bdevs_operational": 3, 00:15:41.971 "base_bdevs_list": [ 00:15:41.971 { 00:15:41.971 "name": null, 00:15:41.971 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:41.971 "is_configured": false, 00:15:41.971 "data_offset": 0, 00:15:41.971 "data_size": 63488 00:15:41.971 }, 00:15:41.971 { 00:15:41.971 "name": "BaseBdev2", 00:15:41.971 "uuid": "b4c9edb7-3507-509a-aa62-6bbcc76bdd97", 00:15:41.971 "is_configured": true, 00:15:41.971 "data_offset": 2048, 00:15:41.971 "data_size": 63488 00:15:41.971 }, 00:15:41.971 { 00:15:41.971 "name": "BaseBdev3", 00:15:41.971 "uuid": "1fb3113d-f193-5393-bea3-abc7ddb2e821", 00:15:41.971 "is_configured": true, 00:15:41.971 "data_offset": 2048, 00:15:41.971 "data_size": 63488 00:15:41.971 }, 00:15:41.971 { 00:15:41.971 "name": "BaseBdev4", 00:15:41.971 "uuid": "a3ec3602-77b9-50fe-9045-ab5d2c6a9dba", 00:15:41.971 "is_configured": true, 00:15:41.971 "data_offset": 2048, 00:15:41.971 "data_size": 63488 00:15:41.971 } 00:15:41.971 ] 00:15:41.971 }' 00:15:42.230 04:06:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:42.230 04:06:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:42.230 04:06:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:42.230 04:06:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:42.230 04:06:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:15:42.230 04:06:35 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:15:42.230 04:06:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:42.230 [2024-12-06 04:06:35.418096] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:42.230 [2024-12-06 04:06:35.434415] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000ca35d0 00:15:42.230 04:06:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:42.230 04:06:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@663 -- # sleep 1 00:15:42.230 [2024-12-06 04:06:35.436606] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:15:43.167 04:06:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:43.167 04:06:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:43.167 04:06:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:43.167 04:06:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:43.167 04:06:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:43.167 04:06:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:43.167 04:06:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:43.167 04:06:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:43.167 04:06:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:43.167 04:06:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:43.167 04:06:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:43.167 "name": "raid_bdev1", 00:15:43.167 "uuid": 
"8348bb07-ebfb-4c4d-a0ed-0eb115c7054b", 00:15:43.167 "strip_size_kb": 0, 00:15:43.167 "state": "online", 00:15:43.167 "raid_level": "raid1", 00:15:43.167 "superblock": true, 00:15:43.167 "num_base_bdevs": 4, 00:15:43.167 "num_base_bdevs_discovered": 4, 00:15:43.167 "num_base_bdevs_operational": 4, 00:15:43.167 "process": { 00:15:43.167 "type": "rebuild", 00:15:43.167 "target": "spare", 00:15:43.167 "progress": { 00:15:43.167 "blocks": 20480, 00:15:43.167 "percent": 32 00:15:43.167 } 00:15:43.167 }, 00:15:43.167 "base_bdevs_list": [ 00:15:43.167 { 00:15:43.167 "name": "spare", 00:15:43.167 "uuid": "00c24a51-0d33-5d60-b07a-ab9da5daf1e7", 00:15:43.167 "is_configured": true, 00:15:43.167 "data_offset": 2048, 00:15:43.167 "data_size": 63488 00:15:43.167 }, 00:15:43.167 { 00:15:43.167 "name": "BaseBdev2", 00:15:43.167 "uuid": "b4c9edb7-3507-509a-aa62-6bbcc76bdd97", 00:15:43.167 "is_configured": true, 00:15:43.167 "data_offset": 2048, 00:15:43.167 "data_size": 63488 00:15:43.167 }, 00:15:43.167 { 00:15:43.167 "name": "BaseBdev3", 00:15:43.167 "uuid": "1fb3113d-f193-5393-bea3-abc7ddb2e821", 00:15:43.167 "is_configured": true, 00:15:43.167 "data_offset": 2048, 00:15:43.167 "data_size": 63488 00:15:43.167 }, 00:15:43.167 { 00:15:43.167 "name": "BaseBdev4", 00:15:43.167 "uuid": "a3ec3602-77b9-50fe-9045-ab5d2c6a9dba", 00:15:43.167 "is_configured": true, 00:15:43.167 "data_offset": 2048, 00:15:43.167 "data_size": 63488 00:15:43.167 } 00:15:43.167 ] 00:15:43.167 }' 00:15:43.167 04:06:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:43.426 04:06:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:43.426 04:06:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:43.426 04:06:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:43.426 04:06:36 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:15:43.426 04:06:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:15:43.426 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:15:43.426 04:06:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4 00:15:43.426 04:06:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:15:43.426 04:06:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' 4 -gt 2 ']' 00:15:43.426 04:06:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@695 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:15:43.426 04:06:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:43.426 04:06:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:43.426 [2024-12-06 04:06:36.591541] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:15:43.426 [2024-12-06 04:06:36.742496] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d000ca35d0 00:15:43.426 04:06:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:43.426 04:06:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@698 -- # base_bdevs[1]= 00:15:43.426 04:06:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@699 -- # (( num_base_bdevs_operational-- )) 00:15:43.426 04:06:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@702 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:43.426 04:06:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:43.426 04:06:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:43.426 04:06:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:43.426 04:06:36 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:43.426 04:06:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:43.426 04:06:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:43.426 04:06:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:43.426 04:06:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:43.426 04:06:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:43.684 04:06:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:43.684 "name": "raid_bdev1", 00:15:43.684 "uuid": "8348bb07-ebfb-4c4d-a0ed-0eb115c7054b", 00:15:43.684 "strip_size_kb": 0, 00:15:43.684 "state": "online", 00:15:43.684 "raid_level": "raid1", 00:15:43.684 "superblock": true, 00:15:43.684 "num_base_bdevs": 4, 00:15:43.684 "num_base_bdevs_discovered": 3, 00:15:43.684 "num_base_bdevs_operational": 3, 00:15:43.684 "process": { 00:15:43.684 "type": "rebuild", 00:15:43.684 "target": "spare", 00:15:43.684 "progress": { 00:15:43.684 "blocks": 24576, 00:15:43.684 "percent": 38 00:15:43.684 } 00:15:43.684 }, 00:15:43.684 "base_bdevs_list": [ 00:15:43.684 { 00:15:43.684 "name": "spare", 00:15:43.684 "uuid": "00c24a51-0d33-5d60-b07a-ab9da5daf1e7", 00:15:43.684 "is_configured": true, 00:15:43.684 "data_offset": 2048, 00:15:43.684 "data_size": 63488 00:15:43.684 }, 00:15:43.684 { 00:15:43.684 "name": null, 00:15:43.684 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:43.684 "is_configured": false, 00:15:43.684 "data_offset": 0, 00:15:43.684 "data_size": 63488 00:15:43.684 }, 00:15:43.684 { 00:15:43.684 "name": "BaseBdev3", 00:15:43.684 "uuid": "1fb3113d-f193-5393-bea3-abc7ddb2e821", 00:15:43.684 "is_configured": true, 00:15:43.684 "data_offset": 2048, 00:15:43.684 "data_size": 63488 00:15:43.684 }, 00:15:43.684 { 00:15:43.684 "name": 
"BaseBdev4", 00:15:43.684 "uuid": "a3ec3602-77b9-50fe-9045-ab5d2c6a9dba", 00:15:43.684 "is_configured": true, 00:15:43.684 "data_offset": 2048, 00:15:43.684 "data_size": 63488 00:15:43.684 } 00:15:43.684 ] 00:15:43.684 }' 00:15:43.684 04:06:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:43.684 04:06:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:43.684 04:06:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:43.684 04:06:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:43.684 04:06:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@706 -- # local timeout=474 00:15:43.684 04:06:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:43.684 04:06:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:43.684 04:06:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:43.684 04:06:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:43.684 04:06:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:43.684 04:06:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:43.684 04:06:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:43.684 04:06:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:43.684 04:06:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:43.684 04:06:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:43.684 04:06:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:15:43.684 04:06:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:43.684 "name": "raid_bdev1", 00:15:43.684 "uuid": "8348bb07-ebfb-4c4d-a0ed-0eb115c7054b", 00:15:43.684 "strip_size_kb": 0, 00:15:43.684 "state": "online", 00:15:43.684 "raid_level": "raid1", 00:15:43.684 "superblock": true, 00:15:43.684 "num_base_bdevs": 4, 00:15:43.684 "num_base_bdevs_discovered": 3, 00:15:43.684 "num_base_bdevs_operational": 3, 00:15:43.684 "process": { 00:15:43.684 "type": "rebuild", 00:15:43.684 "target": "spare", 00:15:43.684 "progress": { 00:15:43.684 "blocks": 26624, 00:15:43.684 "percent": 41 00:15:43.684 } 00:15:43.684 }, 00:15:43.684 "base_bdevs_list": [ 00:15:43.684 { 00:15:43.684 "name": "spare", 00:15:43.684 "uuid": "00c24a51-0d33-5d60-b07a-ab9da5daf1e7", 00:15:43.684 "is_configured": true, 00:15:43.684 "data_offset": 2048, 00:15:43.684 "data_size": 63488 00:15:43.684 }, 00:15:43.684 { 00:15:43.684 "name": null, 00:15:43.684 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:43.684 "is_configured": false, 00:15:43.684 "data_offset": 0, 00:15:43.684 "data_size": 63488 00:15:43.684 }, 00:15:43.684 { 00:15:43.684 "name": "BaseBdev3", 00:15:43.684 "uuid": "1fb3113d-f193-5393-bea3-abc7ddb2e821", 00:15:43.684 "is_configured": true, 00:15:43.684 "data_offset": 2048, 00:15:43.684 "data_size": 63488 00:15:43.684 }, 00:15:43.684 { 00:15:43.684 "name": "BaseBdev4", 00:15:43.684 "uuid": "a3ec3602-77b9-50fe-9045-ab5d2c6a9dba", 00:15:43.684 "is_configured": true, 00:15:43.684 "data_offset": 2048, 00:15:43.684 "data_size": 63488 00:15:43.684 } 00:15:43.684 ] 00:15:43.684 }' 00:15:43.684 04:06:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:43.684 04:06:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:43.684 04:06:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:43.684 04:06:37 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:43.684 04:06:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:15:45.057 04:06:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:45.057 04:06:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:45.057 04:06:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:45.057 04:06:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:45.057 04:06:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:45.057 04:06:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:45.057 04:06:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:45.057 04:06:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:45.057 04:06:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:45.057 04:06:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:45.057 04:06:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:45.057 04:06:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:45.057 "name": "raid_bdev1", 00:15:45.057 "uuid": "8348bb07-ebfb-4c4d-a0ed-0eb115c7054b", 00:15:45.057 "strip_size_kb": 0, 00:15:45.057 "state": "online", 00:15:45.057 "raid_level": "raid1", 00:15:45.057 "superblock": true, 00:15:45.057 "num_base_bdevs": 4, 00:15:45.057 "num_base_bdevs_discovered": 3, 00:15:45.057 "num_base_bdevs_operational": 3, 00:15:45.057 "process": { 00:15:45.057 "type": "rebuild", 00:15:45.057 "target": "spare", 00:15:45.057 "progress": { 00:15:45.057 "blocks": 
49152, 00:15:45.057 "percent": 77 00:15:45.057 } 00:15:45.057 }, 00:15:45.057 "base_bdevs_list": [ 00:15:45.057 { 00:15:45.057 "name": "spare", 00:15:45.057 "uuid": "00c24a51-0d33-5d60-b07a-ab9da5daf1e7", 00:15:45.057 "is_configured": true, 00:15:45.057 "data_offset": 2048, 00:15:45.057 "data_size": 63488 00:15:45.057 }, 00:15:45.057 { 00:15:45.057 "name": null, 00:15:45.057 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:45.057 "is_configured": false, 00:15:45.057 "data_offset": 0, 00:15:45.058 "data_size": 63488 00:15:45.058 }, 00:15:45.058 { 00:15:45.058 "name": "BaseBdev3", 00:15:45.058 "uuid": "1fb3113d-f193-5393-bea3-abc7ddb2e821", 00:15:45.058 "is_configured": true, 00:15:45.058 "data_offset": 2048, 00:15:45.058 "data_size": 63488 00:15:45.058 }, 00:15:45.058 { 00:15:45.058 "name": "BaseBdev4", 00:15:45.058 "uuid": "a3ec3602-77b9-50fe-9045-ab5d2c6a9dba", 00:15:45.058 "is_configured": true, 00:15:45.058 "data_offset": 2048, 00:15:45.058 "data_size": 63488 00:15:45.058 } 00:15:45.058 ] 00:15:45.058 }' 00:15:45.058 04:06:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:45.058 04:06:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:45.058 04:06:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:45.058 04:06:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:45.058 04:06:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:15:45.316 [2024-12-06 04:06:38.651927] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:15:45.316 [2024-12-06 04:06:38.652157] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:15:45.317 [2024-12-06 04:06:38.652365] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:45.884 04:06:39 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:45.884 04:06:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:45.884 04:06:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:45.884 04:06:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:45.884 04:06:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:45.884 04:06:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:45.884 04:06:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:45.884 04:06:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:45.884 04:06:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:45.884 04:06:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:45.884 04:06:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:45.884 04:06:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:45.884 "name": "raid_bdev1", 00:15:45.884 "uuid": "8348bb07-ebfb-4c4d-a0ed-0eb115c7054b", 00:15:45.884 "strip_size_kb": 0, 00:15:45.884 "state": "online", 00:15:45.884 "raid_level": "raid1", 00:15:45.884 "superblock": true, 00:15:45.884 "num_base_bdevs": 4, 00:15:45.884 "num_base_bdevs_discovered": 3, 00:15:45.884 "num_base_bdevs_operational": 3, 00:15:45.884 "base_bdevs_list": [ 00:15:45.884 { 00:15:45.884 "name": "spare", 00:15:45.884 "uuid": "00c24a51-0d33-5d60-b07a-ab9da5daf1e7", 00:15:45.884 "is_configured": true, 00:15:45.884 "data_offset": 2048, 00:15:45.884 "data_size": 63488 00:15:45.884 }, 00:15:45.884 { 00:15:45.884 "name": null, 00:15:45.884 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:15:45.884 "is_configured": false, 00:15:45.884 "data_offset": 0, 00:15:45.884 "data_size": 63488 00:15:45.884 }, 00:15:45.884 { 00:15:45.884 "name": "BaseBdev3", 00:15:45.884 "uuid": "1fb3113d-f193-5393-bea3-abc7ddb2e821", 00:15:45.884 "is_configured": true, 00:15:45.884 "data_offset": 2048, 00:15:45.884 "data_size": 63488 00:15:45.884 }, 00:15:45.884 { 00:15:45.884 "name": "BaseBdev4", 00:15:45.884 "uuid": "a3ec3602-77b9-50fe-9045-ab5d2c6a9dba", 00:15:45.884 "is_configured": true, 00:15:45.884 "data_offset": 2048, 00:15:45.884 "data_size": 63488 00:15:45.884 } 00:15:45.884 ] 00:15:45.884 }' 00:15:45.884 04:06:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:46.143 04:06:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:15:46.143 04:06:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:46.143 04:06:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:15:46.143 04:06:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@709 -- # break 00:15:46.143 04:06:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:46.143 04:06:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:46.143 04:06:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:46.143 04:06:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:46.143 04:06:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:46.143 04:06:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:46.143 04:06:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:46.143 
04:06:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:46.143 04:06:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:46.143 04:06:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:46.143 04:06:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:46.143 "name": "raid_bdev1", 00:15:46.143 "uuid": "8348bb07-ebfb-4c4d-a0ed-0eb115c7054b", 00:15:46.143 "strip_size_kb": 0, 00:15:46.143 "state": "online", 00:15:46.143 "raid_level": "raid1", 00:15:46.143 "superblock": true, 00:15:46.143 "num_base_bdevs": 4, 00:15:46.143 "num_base_bdevs_discovered": 3, 00:15:46.143 "num_base_bdevs_operational": 3, 00:15:46.143 "base_bdevs_list": [ 00:15:46.143 { 00:15:46.143 "name": "spare", 00:15:46.144 "uuid": "00c24a51-0d33-5d60-b07a-ab9da5daf1e7", 00:15:46.144 "is_configured": true, 00:15:46.144 "data_offset": 2048, 00:15:46.144 "data_size": 63488 00:15:46.144 }, 00:15:46.144 { 00:15:46.144 "name": null, 00:15:46.144 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:46.144 "is_configured": false, 00:15:46.144 "data_offset": 0, 00:15:46.144 "data_size": 63488 00:15:46.144 }, 00:15:46.144 { 00:15:46.144 "name": "BaseBdev3", 00:15:46.144 "uuid": "1fb3113d-f193-5393-bea3-abc7ddb2e821", 00:15:46.144 "is_configured": true, 00:15:46.144 "data_offset": 2048, 00:15:46.144 "data_size": 63488 00:15:46.144 }, 00:15:46.144 { 00:15:46.144 "name": "BaseBdev4", 00:15:46.144 "uuid": "a3ec3602-77b9-50fe-9045-ab5d2c6a9dba", 00:15:46.144 "is_configured": true, 00:15:46.144 "data_offset": 2048, 00:15:46.144 "data_size": 63488 00:15:46.144 } 00:15:46.144 ] 00:15:46.144 }' 00:15:46.144 04:06:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:46.144 04:06:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:46.144 04:06:39 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:46.144 04:06:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:46.144 04:06:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:15:46.144 04:06:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:46.144 04:06:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:46.144 04:06:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:46.144 04:06:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:46.144 04:06:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:46.144 04:06:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:46.144 04:06:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:46.144 04:06:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:46.144 04:06:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:46.144 04:06:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:46.144 04:06:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:46.144 04:06:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:46.144 04:06:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:46.144 04:06:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:46.144 04:06:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:46.144 "name": "raid_bdev1", 00:15:46.144 "uuid": 
"8348bb07-ebfb-4c4d-a0ed-0eb115c7054b", 00:15:46.144 "strip_size_kb": 0, 00:15:46.144 "state": "online", 00:15:46.144 "raid_level": "raid1", 00:15:46.144 "superblock": true, 00:15:46.144 "num_base_bdevs": 4, 00:15:46.144 "num_base_bdevs_discovered": 3, 00:15:46.144 "num_base_bdevs_operational": 3, 00:15:46.144 "base_bdevs_list": [ 00:15:46.144 { 00:15:46.144 "name": "spare", 00:15:46.144 "uuid": "00c24a51-0d33-5d60-b07a-ab9da5daf1e7", 00:15:46.144 "is_configured": true, 00:15:46.144 "data_offset": 2048, 00:15:46.144 "data_size": 63488 00:15:46.144 }, 00:15:46.144 { 00:15:46.144 "name": null, 00:15:46.144 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:46.144 "is_configured": false, 00:15:46.144 "data_offset": 0, 00:15:46.144 "data_size": 63488 00:15:46.144 }, 00:15:46.144 { 00:15:46.144 "name": "BaseBdev3", 00:15:46.144 "uuid": "1fb3113d-f193-5393-bea3-abc7ddb2e821", 00:15:46.144 "is_configured": true, 00:15:46.144 "data_offset": 2048, 00:15:46.144 "data_size": 63488 00:15:46.144 }, 00:15:46.144 { 00:15:46.144 "name": "BaseBdev4", 00:15:46.144 "uuid": "a3ec3602-77b9-50fe-9045-ab5d2c6a9dba", 00:15:46.144 "is_configured": true, 00:15:46.144 "data_offset": 2048, 00:15:46.144 "data_size": 63488 00:15:46.144 } 00:15:46.144 ] 00:15:46.144 }' 00:15:46.144 04:06:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:46.144 04:06:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:46.713 04:06:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:15:46.713 04:06:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:46.713 04:06:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:46.713 [2024-12-06 04:06:39.861744] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:46.713 [2024-12-06 04:06:39.861866] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev 
state changing from online to offline 00:15:46.713 [2024-12-06 04:06:39.861991] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:46.713 [2024-12-06 04:06:39.862119] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:46.713 [2024-12-06 04:06:39.862176] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:15:46.713 04:06:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:46.713 04:06:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # jq length 00:15:46.713 04:06:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:46.713 04:06:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:46.713 04:06:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:46.713 04:06:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:46.713 04:06:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:15:46.713 04:06:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:15:46.713 04:06:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:15:46.713 04:06:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:15:46.713 04:06:39 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:15:46.713 04:06:39 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:15:46.713 04:06:39 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:15:46.713 04:06:39 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 
00:15:46.713 04:06:39 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:15:46.713 04:06:39 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:15:46.713 04:06:39 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:15:46.713 04:06:39 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:15:46.713 04:06:39 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:15:46.974 /dev/nbd0 00:15:46.974 04:06:40 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:15:46.974 04:06:40 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:15:46.974 04:06:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:15:46.974 04:06:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:15:46.974 04:06:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:15:46.974 04:06:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:15:46.974 04:06:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:15:46.974 04:06:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:15:46.974 04:06:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:15:46.974 04:06:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:15:46.974 04:06:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:46.974 1+0 records in 00:15:46.974 1+0 records out 00:15:46.974 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000388367 s, 10.5 MB/s 00:15:46.974 04:06:40 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:46.974 04:06:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:15:46.974 04:06:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:46.974 04:06:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:15:46.974 04:06:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:15:46.974 04:06:40 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:46.974 04:06:40 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:15:46.974 04:06:40 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:15:47.234 /dev/nbd1 00:15:47.234 04:06:40 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:15:47.234 04:06:40 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:15:47.234 04:06:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:15:47.234 04:06:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:15:47.234 04:06:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:15:47.234 04:06:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:15:47.234 04:06:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:15:47.234 04:06:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:15:47.234 04:06:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:15:47.234 04:06:40 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@888 -- # (( i <= 20 )) 00:15:47.234 04:06:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:47.234 1+0 records in 00:15:47.234 1+0 records out 00:15:47.234 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000463667 s, 8.8 MB/s 00:15:47.234 04:06:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:47.234 04:06:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:15:47.234 04:06:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:47.234 04:06:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:15:47.234 04:06:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:15:47.234 04:06:40 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:47.234 04:06:40 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:15:47.234 04:06:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:15:47.494 04:06:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:15:47.494 04:06:40 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:15:47.494 04:06:40 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:15:47.494 04:06:40 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:15:47.494 04:06:40 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:15:47.494 04:06:40 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:47.494 04:06:40 
bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:15:47.753 04:06:40 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:15:47.753 04:06:40 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:15:47.753 04:06:40 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:15:47.753 04:06:40 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:47.753 04:06:40 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:47.753 04:06:40 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:15:47.753 04:06:40 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:15:47.753 04:06:40 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:15:47.753 04:06:40 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:47.753 04:06:40 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:15:48.013 04:06:41 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:15:48.013 04:06:41 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:15:48.013 04:06:41 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:15:48.013 04:06:41 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:48.013 04:06:41 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:48.013 04:06:41 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:15:48.013 04:06:41 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:15:48.013 04:06:41 bdev_raid.raid_rebuild_test_sb -- 
bdev/nbd_common.sh@45 -- # return 0 00:15:48.013 04:06:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:15:48.013 04:06:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:15:48.013 04:06:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:48.013 04:06:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:48.013 04:06:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:48.013 04:06:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:15:48.013 04:06:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:48.013 04:06:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:48.013 [2024-12-06 04:06:41.137954] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:15:48.013 [2024-12-06 04:06:41.138022] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:48.013 [2024-12-06 04:06:41.138060] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ba80 00:15:48.013 [2024-12-06 04:06:41.138072] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:48.013 [2024-12-06 04:06:41.140609] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:48.013 [2024-12-06 04:06:41.140659] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:15:48.013 [2024-12-06 04:06:41.140778] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:15:48.013 [2024-12-06 04:06:41.140846] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:48.013 [2024-12-06 04:06:41.141002] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is 
claimed 00:15:48.013 [2024-12-06 04:06:41.141132] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:15:48.013 spare 00:15:48.013 04:06:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:48.013 04:06:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:15:48.013 04:06:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:48.013 04:06:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:48.013 [2024-12-06 04:06:41.241072] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:15:48.013 [2024-12-06 04:06:41.241134] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:15:48.013 [2024-12-06 04:06:41.241550] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc1c80 00:15:48.013 [2024-12-06 04:06:41.241799] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:15:48.013 [2024-12-06 04:06:41.241823] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:15:48.013 [2024-12-06 04:06:41.242077] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:48.013 04:06:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:48.013 04:06:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:15:48.013 04:06:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:48.013 04:06:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:48.013 04:06:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:48.013 04:06:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- 
# local strip_size=0 00:15:48.013 04:06:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:48.013 04:06:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:48.013 04:06:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:48.013 04:06:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:48.013 04:06:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:48.013 04:06:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:48.013 04:06:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:48.013 04:06:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:48.013 04:06:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:48.013 04:06:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:48.013 04:06:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:48.013 "name": "raid_bdev1", 00:15:48.013 "uuid": "8348bb07-ebfb-4c4d-a0ed-0eb115c7054b", 00:15:48.013 "strip_size_kb": 0, 00:15:48.013 "state": "online", 00:15:48.013 "raid_level": "raid1", 00:15:48.013 "superblock": true, 00:15:48.013 "num_base_bdevs": 4, 00:15:48.013 "num_base_bdevs_discovered": 3, 00:15:48.013 "num_base_bdevs_operational": 3, 00:15:48.013 "base_bdevs_list": [ 00:15:48.013 { 00:15:48.013 "name": "spare", 00:15:48.013 "uuid": "00c24a51-0d33-5d60-b07a-ab9da5daf1e7", 00:15:48.013 "is_configured": true, 00:15:48.013 "data_offset": 2048, 00:15:48.013 "data_size": 63488 00:15:48.013 }, 00:15:48.013 { 00:15:48.013 "name": null, 00:15:48.013 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:48.013 "is_configured": false, 00:15:48.013 "data_offset": 2048, 
00:15:48.013 "data_size": 63488 00:15:48.013 }, 00:15:48.013 { 00:15:48.013 "name": "BaseBdev3", 00:15:48.013 "uuid": "1fb3113d-f193-5393-bea3-abc7ddb2e821", 00:15:48.013 "is_configured": true, 00:15:48.013 "data_offset": 2048, 00:15:48.013 "data_size": 63488 00:15:48.013 }, 00:15:48.013 { 00:15:48.013 "name": "BaseBdev4", 00:15:48.014 "uuid": "a3ec3602-77b9-50fe-9045-ab5d2c6a9dba", 00:15:48.014 "is_configured": true, 00:15:48.014 "data_offset": 2048, 00:15:48.014 "data_size": 63488 00:15:48.014 } 00:15:48.014 ] 00:15:48.014 }' 00:15:48.014 04:06:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:48.014 04:06:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:48.583 04:06:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:48.583 04:06:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:48.583 04:06:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:48.583 04:06:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:48.583 04:06:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:48.583 04:06:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:48.583 04:06:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:48.583 04:06:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:48.583 04:06:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:48.583 04:06:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:48.583 04:06:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:48.583 "name": "raid_bdev1", 00:15:48.583 "uuid": 
"8348bb07-ebfb-4c4d-a0ed-0eb115c7054b", 00:15:48.583 "strip_size_kb": 0, 00:15:48.583 "state": "online", 00:15:48.583 "raid_level": "raid1", 00:15:48.583 "superblock": true, 00:15:48.583 "num_base_bdevs": 4, 00:15:48.583 "num_base_bdevs_discovered": 3, 00:15:48.583 "num_base_bdevs_operational": 3, 00:15:48.583 "base_bdevs_list": [ 00:15:48.583 { 00:15:48.583 "name": "spare", 00:15:48.583 "uuid": "00c24a51-0d33-5d60-b07a-ab9da5daf1e7", 00:15:48.583 "is_configured": true, 00:15:48.583 "data_offset": 2048, 00:15:48.583 "data_size": 63488 00:15:48.583 }, 00:15:48.583 { 00:15:48.584 "name": null, 00:15:48.584 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:48.584 "is_configured": false, 00:15:48.584 "data_offset": 2048, 00:15:48.584 "data_size": 63488 00:15:48.584 }, 00:15:48.584 { 00:15:48.584 "name": "BaseBdev3", 00:15:48.584 "uuid": "1fb3113d-f193-5393-bea3-abc7ddb2e821", 00:15:48.584 "is_configured": true, 00:15:48.584 "data_offset": 2048, 00:15:48.584 "data_size": 63488 00:15:48.584 }, 00:15:48.584 { 00:15:48.584 "name": "BaseBdev4", 00:15:48.584 "uuid": "a3ec3602-77b9-50fe-9045-ab5d2c6a9dba", 00:15:48.584 "is_configured": true, 00:15:48.584 "data_offset": 2048, 00:15:48.584 "data_size": 63488 00:15:48.584 } 00:15:48.584 ] 00:15:48.584 }' 00:15:48.584 04:06:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:48.584 04:06:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:48.584 04:06:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:48.584 04:06:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:48.584 04:06:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:48.584 04:06:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:15:48.584 04:06:41 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:15:48.584 04:06:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:48.584 04:06:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:48.584 04:06:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:15:48.584 04:06:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:15:48.584 04:06:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:48.584 04:06:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:48.584 [2024-12-06 04:06:41.913004] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:48.584 04:06:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:48.584 04:06:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:15:48.584 04:06:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:48.584 04:06:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:48.584 04:06:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:48.584 04:06:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:48.584 04:06:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:48.584 04:06:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:48.584 04:06:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:48.584 04:06:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:48.584 04:06:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 
00:15:48.584 04:06:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:48.584 04:06:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:48.584 04:06:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:48.584 04:06:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:48.844 04:06:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:48.844 04:06:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:48.844 "name": "raid_bdev1", 00:15:48.844 "uuid": "8348bb07-ebfb-4c4d-a0ed-0eb115c7054b", 00:15:48.844 "strip_size_kb": 0, 00:15:48.844 "state": "online", 00:15:48.844 "raid_level": "raid1", 00:15:48.844 "superblock": true, 00:15:48.844 "num_base_bdevs": 4, 00:15:48.844 "num_base_bdevs_discovered": 2, 00:15:48.844 "num_base_bdevs_operational": 2, 00:15:48.844 "base_bdevs_list": [ 00:15:48.844 { 00:15:48.844 "name": null, 00:15:48.844 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:48.844 "is_configured": false, 00:15:48.844 "data_offset": 0, 00:15:48.844 "data_size": 63488 00:15:48.844 }, 00:15:48.844 { 00:15:48.844 "name": null, 00:15:48.844 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:48.844 "is_configured": false, 00:15:48.844 "data_offset": 2048, 00:15:48.844 "data_size": 63488 00:15:48.844 }, 00:15:48.844 { 00:15:48.844 "name": "BaseBdev3", 00:15:48.844 "uuid": "1fb3113d-f193-5393-bea3-abc7ddb2e821", 00:15:48.844 "is_configured": true, 00:15:48.844 "data_offset": 2048, 00:15:48.844 "data_size": 63488 00:15:48.844 }, 00:15:48.844 { 00:15:48.844 "name": "BaseBdev4", 00:15:48.844 "uuid": "a3ec3602-77b9-50fe-9045-ab5d2c6a9dba", 00:15:48.844 "is_configured": true, 00:15:48.844 "data_offset": 2048, 00:15:48.844 "data_size": 63488 00:15:48.844 } 00:15:48.844 ] 00:15:48.844 }' 00:15:48.844 04:06:41 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:48.844 04:06:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:49.103 04:06:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:15:49.103 04:06:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:49.103 04:06:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:49.103 [2024-12-06 04:06:42.392216] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:49.104 [2024-12-06 04:06:42.392456] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (5) smaller than existing raid bdev raid_bdev1 (6) 00:15:49.104 [2024-12-06 04:06:42.392490] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:15:49.104 [2024-12-06 04:06:42.392540] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:49.104 [2024-12-06 04:06:42.408789] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc1d50 00:15:49.104 04:06:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:49.104 04:06:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@757 -- # sleep 1 00:15:49.104 [2024-12-06 04:06:42.411001] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:15:50.109 04:06:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:50.109 04:06:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:50.109 04:06:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:50.109 04:06:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 
00:15:50.109 04:06:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:50.109 04:06:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:50.109 04:06:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:50.109 04:06:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:50.109 04:06:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:50.369 04:06:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:50.369 04:06:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:50.369 "name": "raid_bdev1", 00:15:50.369 "uuid": "8348bb07-ebfb-4c4d-a0ed-0eb115c7054b", 00:15:50.369 "strip_size_kb": 0, 00:15:50.369 "state": "online", 00:15:50.369 "raid_level": "raid1", 00:15:50.369 "superblock": true, 00:15:50.369 "num_base_bdevs": 4, 00:15:50.369 "num_base_bdevs_discovered": 3, 00:15:50.369 "num_base_bdevs_operational": 3, 00:15:50.369 "process": { 00:15:50.369 "type": "rebuild", 00:15:50.369 "target": "spare", 00:15:50.369 "progress": { 00:15:50.369 "blocks": 20480, 00:15:50.369 "percent": 32 00:15:50.369 } 00:15:50.369 }, 00:15:50.369 "base_bdevs_list": [ 00:15:50.369 { 00:15:50.369 "name": "spare", 00:15:50.369 "uuid": "00c24a51-0d33-5d60-b07a-ab9da5daf1e7", 00:15:50.369 "is_configured": true, 00:15:50.369 "data_offset": 2048, 00:15:50.369 "data_size": 63488 00:15:50.369 }, 00:15:50.369 { 00:15:50.369 "name": null, 00:15:50.369 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:50.369 "is_configured": false, 00:15:50.369 "data_offset": 2048, 00:15:50.369 "data_size": 63488 00:15:50.369 }, 00:15:50.369 { 00:15:50.369 "name": "BaseBdev3", 00:15:50.369 "uuid": "1fb3113d-f193-5393-bea3-abc7ddb2e821", 00:15:50.369 "is_configured": true, 00:15:50.369 "data_offset": 2048, 00:15:50.369 "data_size": 
63488 00:15:50.369 }, 00:15:50.369 { 00:15:50.369 "name": "BaseBdev4", 00:15:50.369 "uuid": "a3ec3602-77b9-50fe-9045-ab5d2c6a9dba", 00:15:50.369 "is_configured": true, 00:15:50.369 "data_offset": 2048, 00:15:50.369 "data_size": 63488 00:15:50.369 } 00:15:50.369 ] 00:15:50.369 }' 00:15:50.369 04:06:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:50.369 04:06:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:50.369 04:06:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:50.369 04:06:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:50.370 04:06:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:15:50.370 04:06:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:50.370 04:06:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:50.370 [2024-12-06 04:06:43.569765] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:50.370 [2024-12-06 04:06:43.616545] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:15:50.370 [2024-12-06 04:06:43.616646] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:50.370 [2024-12-06 04:06:43.616666] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:50.370 [2024-12-06 04:06:43.616674] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:15:50.370 04:06:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:50.370 04:06:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:15:50.370 04:06:43 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:50.370 04:06:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:50.370 04:06:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:50.370 04:06:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:50.370 04:06:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:50.370 04:06:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:50.370 04:06:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:50.370 04:06:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:50.370 04:06:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:50.370 04:06:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:50.370 04:06:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:50.370 04:06:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:50.370 04:06:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:50.370 04:06:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:50.370 04:06:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:50.370 "name": "raid_bdev1", 00:15:50.370 "uuid": "8348bb07-ebfb-4c4d-a0ed-0eb115c7054b", 00:15:50.370 "strip_size_kb": 0, 00:15:50.370 "state": "online", 00:15:50.370 "raid_level": "raid1", 00:15:50.370 "superblock": true, 00:15:50.370 "num_base_bdevs": 4, 00:15:50.370 "num_base_bdevs_discovered": 2, 00:15:50.370 "num_base_bdevs_operational": 2, 00:15:50.370 "base_bdevs_list": [ 00:15:50.370 { 00:15:50.370 "name": null, 
00:15:50.370 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:50.370 "is_configured": false, 00:15:50.370 "data_offset": 0, 00:15:50.370 "data_size": 63488 00:15:50.370 }, 00:15:50.370 { 00:15:50.370 "name": null, 00:15:50.370 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:50.370 "is_configured": false, 00:15:50.370 "data_offset": 2048, 00:15:50.370 "data_size": 63488 00:15:50.370 }, 00:15:50.370 { 00:15:50.370 "name": "BaseBdev3", 00:15:50.370 "uuid": "1fb3113d-f193-5393-bea3-abc7ddb2e821", 00:15:50.370 "is_configured": true, 00:15:50.370 "data_offset": 2048, 00:15:50.370 "data_size": 63488 00:15:50.370 }, 00:15:50.370 { 00:15:50.370 "name": "BaseBdev4", 00:15:50.370 "uuid": "a3ec3602-77b9-50fe-9045-ab5d2c6a9dba", 00:15:50.370 "is_configured": true, 00:15:50.370 "data_offset": 2048, 00:15:50.370 "data_size": 63488 00:15:50.370 } 00:15:50.370 ] 00:15:50.370 }' 00:15:50.370 04:06:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:50.370 04:06:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:50.940 04:06:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:15:50.940 04:06:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:50.940 04:06:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:50.940 [2024-12-06 04:06:44.102040] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:15:50.940 [2024-12-06 04:06:44.102129] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:50.940 [2024-12-06 04:06:44.102165] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c380 00:15:50.940 [2024-12-06 04:06:44.102176] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:50.940 [2024-12-06 04:06:44.102688] vbdev_passthru.c: 710:vbdev_passthru_register: 
*NOTICE*: pt_bdev registered 00:15:50.940 [2024-12-06 04:06:44.102718] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:15:50.940 [2024-12-06 04:06:44.102820] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:15:50.940 [2024-12-06 04:06:44.102843] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (5) smaller than existing raid bdev raid_bdev1 (6) 00:15:50.940 [2024-12-06 04:06:44.102860] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:15:50.940 [2024-12-06 04:06:44.102886] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:50.940 [2024-12-06 04:06:44.118152] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc1e20 00:15:50.940 spare 00:15:50.940 04:06:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:50.940 [2024-12-06 04:06:44.120026] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:15:50.940 04:06:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@764 -- # sleep 1 00:15:51.880 04:06:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:51.880 04:06:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:51.880 04:06:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:51.880 04:06:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:51.880 04:06:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:51.880 04:06:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:51.881 04:06:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:51.881 
04:06:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:51.881 04:06:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:51.881 04:06:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:51.881 04:06:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:51.881 "name": "raid_bdev1", 00:15:51.881 "uuid": "8348bb07-ebfb-4c4d-a0ed-0eb115c7054b", 00:15:51.881 "strip_size_kb": 0, 00:15:51.881 "state": "online", 00:15:51.881 "raid_level": "raid1", 00:15:51.881 "superblock": true, 00:15:51.881 "num_base_bdevs": 4, 00:15:51.881 "num_base_bdevs_discovered": 3, 00:15:51.881 "num_base_bdevs_operational": 3, 00:15:51.881 "process": { 00:15:51.881 "type": "rebuild", 00:15:51.881 "target": "spare", 00:15:51.881 "progress": { 00:15:51.881 "blocks": 20480, 00:15:51.881 "percent": 32 00:15:51.881 } 00:15:51.881 }, 00:15:51.881 "base_bdevs_list": [ 00:15:51.881 { 00:15:51.881 "name": "spare", 00:15:51.881 "uuid": "00c24a51-0d33-5d60-b07a-ab9da5daf1e7", 00:15:51.881 "is_configured": true, 00:15:51.881 "data_offset": 2048, 00:15:51.881 "data_size": 63488 00:15:51.881 }, 00:15:51.881 { 00:15:51.881 "name": null, 00:15:51.881 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:51.881 "is_configured": false, 00:15:51.881 "data_offset": 2048, 00:15:51.881 "data_size": 63488 00:15:51.881 }, 00:15:51.881 { 00:15:51.881 "name": "BaseBdev3", 00:15:51.881 "uuid": "1fb3113d-f193-5393-bea3-abc7ddb2e821", 00:15:51.881 "is_configured": true, 00:15:51.881 "data_offset": 2048, 00:15:51.881 "data_size": 63488 00:15:51.881 }, 00:15:51.881 { 00:15:51.881 "name": "BaseBdev4", 00:15:51.881 "uuid": "a3ec3602-77b9-50fe-9045-ab5d2c6a9dba", 00:15:51.881 "is_configured": true, 00:15:51.881 "data_offset": 2048, 00:15:51.881 "data_size": 63488 00:15:51.881 } 00:15:51.881 ] 00:15:51.881 }' 00:15:51.881 04:06:45 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:51.881 04:06:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:51.881 04:06:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:52.140 04:06:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:52.140 04:06:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:15:52.140 04:06:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:52.140 04:06:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:52.140 [2024-12-06 04:06:45.275628] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:52.140 [2024-12-06 04:06:45.325233] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:15:52.140 [2024-12-06 04:06:45.325300] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:52.140 [2024-12-06 04:06:45.325316] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:52.140 [2024-12-06 04:06:45.325325] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:15:52.140 04:06:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:52.140 04:06:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:15:52.140 04:06:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:52.140 04:06:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:52.140 04:06:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:52.140 04:06:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 
-- # local strip_size=0 00:15:52.140 04:06:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:52.140 04:06:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:52.140 04:06:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:52.140 04:06:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:52.140 04:06:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:52.140 04:06:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:52.140 04:06:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:52.140 04:06:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:52.140 04:06:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:52.140 04:06:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:52.140 04:06:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:52.140 "name": "raid_bdev1", 00:15:52.140 "uuid": "8348bb07-ebfb-4c4d-a0ed-0eb115c7054b", 00:15:52.140 "strip_size_kb": 0, 00:15:52.140 "state": "online", 00:15:52.140 "raid_level": "raid1", 00:15:52.140 "superblock": true, 00:15:52.140 "num_base_bdevs": 4, 00:15:52.140 "num_base_bdevs_discovered": 2, 00:15:52.140 "num_base_bdevs_operational": 2, 00:15:52.140 "base_bdevs_list": [ 00:15:52.140 { 00:15:52.140 "name": null, 00:15:52.140 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:52.140 "is_configured": false, 00:15:52.140 "data_offset": 0, 00:15:52.140 "data_size": 63488 00:15:52.140 }, 00:15:52.140 { 00:15:52.140 "name": null, 00:15:52.140 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:52.140 "is_configured": false, 00:15:52.140 "data_offset": 2048, 
00:15:52.140 "data_size": 63488 00:15:52.140 }, 00:15:52.140 { 00:15:52.140 "name": "BaseBdev3", 00:15:52.140 "uuid": "1fb3113d-f193-5393-bea3-abc7ddb2e821", 00:15:52.140 "is_configured": true, 00:15:52.140 "data_offset": 2048, 00:15:52.140 "data_size": 63488 00:15:52.140 }, 00:15:52.140 { 00:15:52.140 "name": "BaseBdev4", 00:15:52.140 "uuid": "a3ec3602-77b9-50fe-9045-ab5d2c6a9dba", 00:15:52.140 "is_configured": true, 00:15:52.140 "data_offset": 2048, 00:15:52.140 "data_size": 63488 00:15:52.140 } 00:15:52.140 ] 00:15:52.140 }' 00:15:52.140 04:06:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:52.140 04:06:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:52.709 04:06:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:52.709 04:06:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:52.709 04:06:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:52.709 04:06:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:52.709 04:06:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:52.709 04:06:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:52.709 04:06:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:52.709 04:06:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:52.709 04:06:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:52.709 04:06:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:52.709 04:06:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:52.709 "name": "raid_bdev1", 00:15:52.709 "uuid": 
"8348bb07-ebfb-4c4d-a0ed-0eb115c7054b", 00:15:52.709 "strip_size_kb": 0, 00:15:52.709 "state": "online", 00:15:52.709 "raid_level": "raid1", 00:15:52.709 "superblock": true, 00:15:52.709 "num_base_bdevs": 4, 00:15:52.709 "num_base_bdevs_discovered": 2, 00:15:52.709 "num_base_bdevs_operational": 2, 00:15:52.709 "base_bdevs_list": [ 00:15:52.709 { 00:15:52.709 "name": null, 00:15:52.709 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:52.709 "is_configured": false, 00:15:52.709 "data_offset": 0, 00:15:52.709 "data_size": 63488 00:15:52.709 }, 00:15:52.709 { 00:15:52.709 "name": null, 00:15:52.709 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:52.709 "is_configured": false, 00:15:52.709 "data_offset": 2048, 00:15:52.709 "data_size": 63488 00:15:52.709 }, 00:15:52.709 { 00:15:52.709 "name": "BaseBdev3", 00:15:52.709 "uuid": "1fb3113d-f193-5393-bea3-abc7ddb2e821", 00:15:52.709 "is_configured": true, 00:15:52.709 "data_offset": 2048, 00:15:52.709 "data_size": 63488 00:15:52.709 }, 00:15:52.709 { 00:15:52.709 "name": "BaseBdev4", 00:15:52.709 "uuid": "a3ec3602-77b9-50fe-9045-ab5d2c6a9dba", 00:15:52.709 "is_configured": true, 00:15:52.709 "data_offset": 2048, 00:15:52.709 "data_size": 63488 00:15:52.709 } 00:15:52.709 ] 00:15:52.709 }' 00:15:52.709 04:06:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:52.709 04:06:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:52.710 04:06:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:52.710 04:06:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:52.710 04:06:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:15:52.710 04:06:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:52.710 04:06:45 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:15:52.710 04:06:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:52.710 04:06:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:15:52.710 04:06:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:52.710 04:06:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:52.710 [2024-12-06 04:06:45.959005] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:15:52.710 [2024-12-06 04:06:45.959087] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:52.710 [2024-12-06 04:06:45.959113] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c980 00:15:52.710 [2024-12-06 04:06:45.959124] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:52.710 [2024-12-06 04:06:45.959631] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:52.710 [2024-12-06 04:06:45.959663] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:15:52.710 [2024-12-06 04:06:45.959757] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:15:52.710 [2024-12-06 04:06:45.959782] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (6) 00:15:52.710 [2024-12-06 04:06:45.959791] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:15:52.710 [2024-12-06 04:06:45.959816] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:15:52.710 BaseBdev1 00:15:52.710 04:06:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:52.710 04:06:45 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@775 -- # sleep 1 00:15:53.649 04:06:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:15:53.649 04:06:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:53.649 04:06:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:53.649 04:06:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:53.649 04:06:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:53.649 04:06:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:53.649 04:06:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:53.649 04:06:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:53.649 04:06:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:53.649 04:06:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:53.649 04:06:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:53.649 04:06:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:53.649 04:06:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:53.649 04:06:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:53.649 04:06:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:53.909 04:06:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:53.909 "name": "raid_bdev1", 00:15:53.909 "uuid": "8348bb07-ebfb-4c4d-a0ed-0eb115c7054b", 00:15:53.909 "strip_size_kb": 0, 00:15:53.909 "state": "online", 00:15:53.909 
"raid_level": "raid1", 00:15:53.909 "superblock": true, 00:15:53.909 "num_base_bdevs": 4, 00:15:53.909 "num_base_bdevs_discovered": 2, 00:15:53.909 "num_base_bdevs_operational": 2, 00:15:53.909 "base_bdevs_list": [ 00:15:53.909 { 00:15:53.909 "name": null, 00:15:53.909 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:53.909 "is_configured": false, 00:15:53.909 "data_offset": 0, 00:15:53.909 "data_size": 63488 00:15:53.909 }, 00:15:53.909 { 00:15:53.909 "name": null, 00:15:53.909 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:53.909 "is_configured": false, 00:15:53.909 "data_offset": 2048, 00:15:53.909 "data_size": 63488 00:15:53.909 }, 00:15:53.909 { 00:15:53.909 "name": "BaseBdev3", 00:15:53.909 "uuid": "1fb3113d-f193-5393-bea3-abc7ddb2e821", 00:15:53.909 "is_configured": true, 00:15:53.909 "data_offset": 2048, 00:15:53.909 "data_size": 63488 00:15:53.909 }, 00:15:53.909 { 00:15:53.909 "name": "BaseBdev4", 00:15:53.909 "uuid": "a3ec3602-77b9-50fe-9045-ab5d2c6a9dba", 00:15:53.909 "is_configured": true, 00:15:53.909 "data_offset": 2048, 00:15:53.909 "data_size": 63488 00:15:53.909 } 00:15:53.909 ] 00:15:53.909 }' 00:15:53.909 04:06:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:53.909 04:06:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:54.169 04:06:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:54.169 04:06:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:54.169 04:06:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:54.169 04:06:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:54.170 04:06:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:54.170 04:06:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | 
select(.name == "raid_bdev1")' 00:15:54.170 04:06:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:54.170 04:06:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:54.170 04:06:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:54.170 04:06:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:54.170 04:06:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:54.170 "name": "raid_bdev1", 00:15:54.170 "uuid": "8348bb07-ebfb-4c4d-a0ed-0eb115c7054b", 00:15:54.170 "strip_size_kb": 0, 00:15:54.170 "state": "online", 00:15:54.170 "raid_level": "raid1", 00:15:54.170 "superblock": true, 00:15:54.170 "num_base_bdevs": 4, 00:15:54.170 "num_base_bdevs_discovered": 2, 00:15:54.170 "num_base_bdevs_operational": 2, 00:15:54.170 "base_bdevs_list": [ 00:15:54.170 { 00:15:54.170 "name": null, 00:15:54.170 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:54.170 "is_configured": false, 00:15:54.170 "data_offset": 0, 00:15:54.170 "data_size": 63488 00:15:54.170 }, 00:15:54.170 { 00:15:54.170 "name": null, 00:15:54.170 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:54.170 "is_configured": false, 00:15:54.170 "data_offset": 2048, 00:15:54.170 "data_size": 63488 00:15:54.170 }, 00:15:54.170 { 00:15:54.170 "name": "BaseBdev3", 00:15:54.170 "uuid": "1fb3113d-f193-5393-bea3-abc7ddb2e821", 00:15:54.170 "is_configured": true, 00:15:54.170 "data_offset": 2048, 00:15:54.170 "data_size": 63488 00:15:54.170 }, 00:15:54.170 { 00:15:54.170 "name": "BaseBdev4", 00:15:54.170 "uuid": "a3ec3602-77b9-50fe-9045-ab5d2c6a9dba", 00:15:54.170 "is_configured": true, 00:15:54.170 "data_offset": 2048, 00:15:54.170 "data_size": 63488 00:15:54.170 } 00:15:54.170 ] 00:15:54.170 }' 00:15:54.170 04:06:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:54.430 
04:06:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:54.430 04:06:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:54.430 04:06:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:54.430 04:06:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:15:54.430 04:06:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@652 -- # local es=0 00:15:54.430 04:06:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:15:54.430 04:06:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:15:54.430 04:06:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:54.430 04:06:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:15:54.430 04:06:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:54.430 04:06:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:15:54.430 04:06:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:54.430 04:06:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:54.430 [2024-12-06 04:06:47.588454] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:54.430 [2024-12-06 04:06:47.588690] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (6) 00:15:54.430 [2024-12-06 04:06:47.588719] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:15:54.430 request: 
00:15:54.430 { 00:15:54.430 "base_bdev": "BaseBdev1", 00:15:54.430 "raid_bdev": "raid_bdev1", 00:15:54.430 "method": "bdev_raid_add_base_bdev", 00:15:54.430 "req_id": 1 00:15:54.430 } 00:15:54.430 Got JSON-RPC error response 00:15:54.430 response: 00:15:54.430 { 00:15:54.430 "code": -22, 00:15:54.430 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:15:54.430 } 00:15:54.430 04:06:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:15:54.430 04:06:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@655 -- # es=1 00:15:54.430 04:06:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:15:54.430 04:06:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:15:54.430 04:06:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:15:54.430 04:06:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@779 -- # sleep 1 00:15:55.367 04:06:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:15:55.367 04:06:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:55.367 04:06:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:55.367 04:06:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:55.367 04:06:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:55.367 04:06:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:55.367 04:06:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:55.367 04:06:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:55.367 04:06:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
00:15:55.367 04:06:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:55.367 04:06:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:55.367 04:06:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:55.367 04:06:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:55.367 04:06:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:55.367 04:06:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:55.367 04:06:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:55.367 "name": "raid_bdev1", 00:15:55.367 "uuid": "8348bb07-ebfb-4c4d-a0ed-0eb115c7054b", 00:15:55.367 "strip_size_kb": 0, 00:15:55.367 "state": "online", 00:15:55.367 "raid_level": "raid1", 00:15:55.367 "superblock": true, 00:15:55.367 "num_base_bdevs": 4, 00:15:55.367 "num_base_bdevs_discovered": 2, 00:15:55.367 "num_base_bdevs_operational": 2, 00:15:55.367 "base_bdevs_list": [ 00:15:55.367 { 00:15:55.367 "name": null, 00:15:55.367 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:55.367 "is_configured": false, 00:15:55.367 "data_offset": 0, 00:15:55.367 "data_size": 63488 00:15:55.367 }, 00:15:55.367 { 00:15:55.367 "name": null, 00:15:55.367 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:55.367 "is_configured": false, 00:15:55.367 "data_offset": 2048, 00:15:55.367 "data_size": 63488 00:15:55.367 }, 00:15:55.367 { 00:15:55.367 "name": "BaseBdev3", 00:15:55.367 "uuid": "1fb3113d-f193-5393-bea3-abc7ddb2e821", 00:15:55.367 "is_configured": true, 00:15:55.367 "data_offset": 2048, 00:15:55.367 "data_size": 63488 00:15:55.367 }, 00:15:55.367 { 00:15:55.367 "name": "BaseBdev4", 00:15:55.367 "uuid": "a3ec3602-77b9-50fe-9045-ab5d2c6a9dba", 00:15:55.367 "is_configured": true, 00:15:55.367 "data_offset": 2048, 00:15:55.367 
"data_size": 63488 00:15:55.367 } 00:15:55.367 ] 00:15:55.367 }' 00:15:55.367 04:06:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:55.367 04:06:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:55.936 04:06:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:55.936 04:06:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:55.936 04:06:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:55.936 04:06:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:55.936 04:06:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:55.936 04:06:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:55.936 04:06:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:55.936 04:06:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:55.936 04:06:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:55.936 04:06:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:55.936 04:06:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:55.936 "name": "raid_bdev1", 00:15:55.936 "uuid": "8348bb07-ebfb-4c4d-a0ed-0eb115c7054b", 00:15:55.936 "strip_size_kb": 0, 00:15:55.936 "state": "online", 00:15:55.936 "raid_level": "raid1", 00:15:55.936 "superblock": true, 00:15:55.936 "num_base_bdevs": 4, 00:15:55.936 "num_base_bdevs_discovered": 2, 00:15:55.936 "num_base_bdevs_operational": 2, 00:15:55.936 "base_bdevs_list": [ 00:15:55.936 { 00:15:55.936 "name": null, 00:15:55.936 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:55.936 "is_configured": false, 
00:15:55.936 "data_offset": 0, 00:15:55.936 "data_size": 63488 00:15:55.936 }, 00:15:55.936 { 00:15:55.936 "name": null, 00:15:55.936 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:55.936 "is_configured": false, 00:15:55.936 "data_offset": 2048, 00:15:55.936 "data_size": 63488 00:15:55.936 }, 00:15:55.936 { 00:15:55.936 "name": "BaseBdev3", 00:15:55.936 "uuid": "1fb3113d-f193-5393-bea3-abc7ddb2e821", 00:15:55.936 "is_configured": true, 00:15:55.936 "data_offset": 2048, 00:15:55.936 "data_size": 63488 00:15:55.936 }, 00:15:55.936 { 00:15:55.936 "name": "BaseBdev4", 00:15:55.936 "uuid": "a3ec3602-77b9-50fe-9045-ab5d2c6a9dba", 00:15:55.936 "is_configured": true, 00:15:55.936 "data_offset": 2048, 00:15:55.936 "data_size": 63488 00:15:55.936 } 00:15:55.936 ] 00:15:55.936 }' 00:15:55.936 04:06:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:55.936 04:06:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:55.936 04:06:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:55.936 04:06:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:55.936 04:06:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@784 -- # killprocess 78217 00:15:55.936 04:06:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@954 -- # '[' -z 78217 ']' 00:15:55.936 04:06:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@958 -- # kill -0 78217 00:15:55.936 04:06:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@959 -- # uname 00:15:55.936 04:06:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:55.936 04:06:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 78217 00:15:55.936 killing process with pid 78217 00:15:55.936 Received shutdown signal, test time was about 60.000000 seconds 
00:15:55.936 00:15:55.936 Latency(us) 00:15:55.936 [2024-12-06T04:06:49.290Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:55.936 [2024-12-06T04:06:49.290Z] =================================================================================================================== 00:15:55.936 [2024-12-06T04:06:49.290Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:15:55.937 04:06:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:55.937 04:06:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:55.937 04:06:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 78217' 00:15:55.937 04:06:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@973 -- # kill 78217 00:15:55.937 04:06:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@978 -- # wait 78217 00:15:55.937 [2024-12-06 04:06:49.224665] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:55.937 [2024-12-06 04:06:49.224798] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:55.937 [2024-12-06 04:06:49.224881] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:55.937 [2024-12-06 04:06:49.224897] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:15:56.570 [2024-12-06 04:06:49.759020] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:57.959 04:06:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@786 -- # return 0 00:15:57.959 00:15:57.959 real 0m26.582s 00:15:57.959 user 0m31.663s 00:15:57.959 sys 0m4.013s 00:15:57.959 04:06:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:57.959 04:06:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 
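The raid_rebuild_test_sb run above ends with the degraded array still online: verify_raid_bdev_state captures the bdev_raid_get_bdevs output with jq and checks that 2 of the 4 base bdevs remain configured. As a minimal, self-contained sketch of that check (the JSON below is a trimmed copy of the raid_bdev_info printed above; the grep-based count only illustrates what the num_base_bdevs_discovered field reports — the real helper reads the field directly via jq):

```shell
#!/bin/sh
# Recount configured base bdevs from a trimmed copy of the raid_bdev_info
# JSON captured in the log above. Two slots were wiped to the all-zero UUID
# when their base bdevs were removed, so only BaseBdev3/BaseBdev4 count.
raid_bdev_info='{
  "name": "raid_bdev1",
  "state": "online",
  "raid_level": "raid1",
  "num_base_bdevs": 4,
  "num_base_bdevs_discovered": 2,
  "base_bdevs_list": [
    { "name": null, "is_configured": false },
    { "name": null, "is_configured": false },
    { "name": "BaseBdev3", "is_configured": true },
    { "name": "BaseBdev4", "is_configured": true }
  ]
}'
# One base_bdevs_list entry per line, so a line count equals the bdev count.
discovered=$(printf '%s\n' "$raid_bdev_info" | grep -c '"is_configured": true')
echo "discovered=$discovered"
```

For raid1 the array stays online as long as at least one mirror leg is configured, which is why the state check above expects online rather than offline with two slots gone.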
00:15:57.959 ************************************ 00:15:57.959 END TEST raid_rebuild_test_sb 00:15:57.959 ************************************ 00:15:57.959 04:06:51 bdev_raid -- bdev/bdev_raid.sh@980 -- # run_test raid_rebuild_test_io raid_rebuild_test raid1 4 false true true 00:15:57.959 04:06:51 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:15:57.959 04:06:51 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:57.959 04:06:51 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:15:57.959 ************************************ 00:15:57.959 START TEST raid_rebuild_test_io 00:15:57.959 ************************************ 00:15:57.959 04:06:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 4 false true true 00:15:57.959 04:06:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:15:57.959 04:06:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4 00:15:57.959 04:06:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@571 -- # local superblock=false 00:15:57.959 04:06:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@572 -- # local background_io=true 00:15:57.959 04:06:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@573 -- # local verify=true 00:15:57.959 04:06:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:15:57.959 04:06:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:57.959 04:06:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:15:57.959 04:06:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:15:57.959 04:06:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:57.959 04:06:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:15:57.959 04:06:51 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@574 -- # (( i++ )) 00:15:57.959 04:06:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:57.959 04:06:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:15:57.959 04:06:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:15:57.959 04:06:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:57.959 04:06:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev4 00:15:57.959 04:06:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:15:57.959 04:06:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:57.959 04:06:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:15:57.959 04:06:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:15:57.959 04:06:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:15:57.959 04:06:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # local strip_size 00:15:57.959 04:06:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@577 -- # local create_arg 00:15:57.959 04:06:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:15:57.959 04:06:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@579 -- # local data_offset 00:15:57.959 04:06:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:15:57.959 04:06:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:15:57.959 04:06:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@592 -- # '[' false = true ']' 00:15:57.959 04:06:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@597 -- # raid_pid=78985 00:15:57.959 04:06:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@598 -- # 
waitforlisten 78985 00:15:57.959 04:06:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:15:57.959 04:06:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@835 -- # '[' -z 78985 ']' 00:15:57.959 04:06:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:57.959 04:06:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:57.959 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:57.959 04:06:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:57.959 04:06:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:57.959 04:06:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:15:57.959 I/O size of 3145728 is greater than zero copy threshold (65536). 00:15:57.959 Zero copy mechanism will not be used. 00:15:57.959 [2024-12-06 04:06:51.132658] Starting SPDK v25.01-pre git sha1 a4d2a837b / DPDK 24.03.0 initialization... 
00:15:57.959 [2024-12-06 04:06:51.132789] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78985 ] 00:15:57.959 [2024-12-06 04:06:51.300246] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:58.219 [2024-12-06 04:06:51.430539] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:58.479 [2024-12-06 04:06:51.661832] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:58.479 [2024-12-06 04:06:51.661902] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:58.737 04:06:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:58.737 04:06:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@868 -- # return 0 00:15:58.737 04:06:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:15:58.737 04:06:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:15:58.737 04:06:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:58.737 04:06:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:15:58.737 BaseBdev1_malloc 00:15:58.737 04:06:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:58.737 04:06:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:15:58.737 04:06:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:58.737 04:06:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:15:58.737 [2024-12-06 04:06:52.071593] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on 
BaseBdev1_malloc 00:15:58.737 [2024-12-06 04:06:52.071668] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:58.737 [2024-12-06 04:06:52.071694] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:15:58.737 [2024-12-06 04:06:52.071708] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:58.737 [2024-12-06 04:06:52.074259] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:58.737 [2024-12-06 04:06:52.074308] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:15:58.737 BaseBdev1 00:15:58.737 04:06:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:58.737 04:06:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:15:58.737 04:06:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:15:58.737 04:06:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:58.737 04:06:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:15:58.998 BaseBdev2_malloc 00:15:58.998 04:06:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:58.998 04:06:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:15:58.998 04:06:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:58.998 04:06:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:15:58.998 [2024-12-06 04:06:52.131407] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:15:58.998 [2024-12-06 04:06:52.131488] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:58.998 [2024-12-06 04:06:52.131517] vbdev_passthru.c: 
682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:15:58.998 [2024-12-06 04:06:52.131531] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:58.998 [2024-12-06 04:06:52.134022] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:58.998 [2024-12-06 04:06:52.134087] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:15:58.998 BaseBdev2 00:15:58.998 04:06:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:58.998 04:06:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:15:58.998 04:06:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:15:58.998 04:06:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:58.998 04:06:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:15:58.998 BaseBdev3_malloc 00:15:58.998 04:06:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:58.998 04:06:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:15:58.998 04:06:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:58.998 04:06:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:15:58.998 [2024-12-06 04:06:52.200754] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:15:58.998 [2024-12-06 04:06:52.200826] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:58.998 [2024-12-06 04:06:52.200856] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:15:58.998 [2024-12-06 04:06:52.200883] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 
00:15:58.998 [2024-12-06 04:06:52.203351] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:58.998 [2024-12-06 04:06:52.203397] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:15:58.998 BaseBdev3 00:15:58.998 04:06:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:58.998 04:06:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:15:58.998 04:06:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:15:58.998 04:06:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:58.998 04:06:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:15:58.998 BaseBdev4_malloc 00:15:58.998 04:06:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:58.998 04:06:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:15:58.998 04:06:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:58.998 04:06:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:15:58.998 [2024-12-06 04:06:52.262703] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:15:58.998 [2024-12-06 04:06:52.262780] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:58.998 [2024-12-06 04:06:52.262805] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:15:58.998 [2024-12-06 04:06:52.262817] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:58.998 [2024-12-06 04:06:52.265304] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:58.998 [2024-12-06 04:06:52.265363] vbdev_passthru.c: 
711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:15:58.998 BaseBdev4 00:15:58.998 04:06:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:58.998 04:06:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:15:58.998 04:06:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:58.998 04:06:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:15:58.998 spare_malloc 00:15:58.998 04:06:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:58.998 04:06:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:15:58.998 04:06:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:58.998 04:06:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:15:58.998 spare_delay 00:15:58.998 04:06:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:58.998 04:06:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:15:58.998 04:06:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:58.998 04:06:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:15:58.998 [2024-12-06 04:06:52.332682] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:15:58.998 [2024-12-06 04:06:52.332746] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:58.998 [2024-12-06 04:06:52.332769] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:15:58.998 [2024-12-06 04:06:52.332784] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 
00:15:58.998 [2024-12-06 04:06:52.335134] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:58.998 [2024-12-06 04:06:52.335177] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:15:58.998 spare 00:15:58.998 04:06:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:58.998 04:06:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 00:15:58.998 04:06:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:58.998 04:06:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:15:58.998 [2024-12-06 04:06:52.340723] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:58.998 [2024-12-06 04:06:52.342873] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:58.998 [2024-12-06 04:06:52.342955] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:58.998 [2024-12-06 04:06:52.343029] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:15:58.998 [2024-12-06 04:06:52.343170] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:15:58.998 [2024-12-06 04:06:52.343197] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:15:58.998 [2024-12-06 04:06:52.343528] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:15:58.998 [2024-12-06 04:06:52.343772] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:15:58.998 [2024-12-06 04:06:52.343798] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:15:58.998 [2024-12-06 04:06:52.343986] bdev_raid.c: 345:raid_bdev_destroy_cb: 
*DEBUG*: raid_bdev_destroy_cb 00:15:58.998 04:06:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:58.998 04:06:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:15:58.998 04:06:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:58.998 04:06:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:58.998 04:06:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:58.998 04:06:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:58.998 04:06:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:58.998 04:06:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:58.998 04:06:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:58.998 04:06:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:58.998 04:06:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:58.998 04:06:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:58.998 04:06:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:58.998 04:06:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:58.998 04:06:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:15:59.259 04:06:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:59.259 04:06:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:59.259 "name": "raid_bdev1", 00:15:59.259 "uuid": "53fe43e3-8a42-4b36-8dca-07d70455ac2b", 00:15:59.259 
"strip_size_kb": 0, 00:15:59.259 "state": "online", 00:15:59.259 "raid_level": "raid1", 00:15:59.259 "superblock": false, 00:15:59.259 "num_base_bdevs": 4, 00:15:59.259 "num_base_bdevs_discovered": 4, 00:15:59.259 "num_base_bdevs_operational": 4, 00:15:59.259 "base_bdevs_list": [ 00:15:59.259 { 00:15:59.259 "name": "BaseBdev1", 00:15:59.259 "uuid": "f19d9864-ad03-5d3d-a7e6-cf76565ad88c", 00:15:59.259 "is_configured": true, 00:15:59.259 "data_offset": 0, 00:15:59.259 "data_size": 65536 00:15:59.259 }, 00:15:59.259 { 00:15:59.259 "name": "BaseBdev2", 00:15:59.259 "uuid": "3b11309e-3db8-57b0-9b17-4cbd7d358419", 00:15:59.259 "is_configured": true, 00:15:59.259 "data_offset": 0, 00:15:59.259 "data_size": 65536 00:15:59.259 }, 00:15:59.259 { 00:15:59.259 "name": "BaseBdev3", 00:15:59.259 "uuid": "e09b8682-7e50-5a68-9c9f-a43936b3e7b3", 00:15:59.259 "is_configured": true, 00:15:59.259 "data_offset": 0, 00:15:59.259 "data_size": 65536 00:15:59.259 }, 00:15:59.259 { 00:15:59.259 "name": "BaseBdev4", 00:15:59.259 "uuid": "4425e081-495b-5651-9c87-8f45c81e5e02", 00:15:59.259 "is_configured": true, 00:15:59.259 "data_offset": 0, 00:15:59.259 "data_size": 65536 00:15:59.259 } 00:15:59.259 ] 00:15:59.259 }' 00:15:59.259 04:06:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:59.259 04:06:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:15:59.519 04:06:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:15:59.519 04:06:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:15:59.519 04:06:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:59.519 04:06:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:15:59.519 [2024-12-06 04:06:52.716671] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:59.519 04:06:52 
bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:59.519 04:06:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=65536 00:15:59.519 04:06:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:15:59.519 04:06:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:59.519 04:06:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:59.519 04:06:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:15:59.519 04:06:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:59.519 04:06:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@619 -- # data_offset=0 00:15:59.519 04:06:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@621 -- # '[' true = true ']' 00:15:59.519 04:06:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:15:59.519 04:06:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:59.519 04:06:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:15:59.519 04:06:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@623 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:15:59.519 [2024-12-06 04:06:52.784164] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:15:59.520 04:06:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:59.520 04:06:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:15:59.520 04:06:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:59.520 04:06:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 
00:15:59.520 04:06:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:59.520 04:06:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:59.520 04:06:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:59.520 04:06:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:59.520 04:06:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:59.520 04:06:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:59.520 04:06:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:59.520 04:06:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:59.520 04:06:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:59.520 04:06:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:59.520 04:06:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:15:59.520 04:06:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:59.520 04:06:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:59.520 "name": "raid_bdev1", 00:15:59.520 "uuid": "53fe43e3-8a42-4b36-8dca-07d70455ac2b", 00:15:59.520 "strip_size_kb": 0, 00:15:59.520 "state": "online", 00:15:59.520 "raid_level": "raid1", 00:15:59.520 "superblock": false, 00:15:59.520 "num_base_bdevs": 4, 00:15:59.520 "num_base_bdevs_discovered": 3, 00:15:59.520 "num_base_bdevs_operational": 3, 00:15:59.520 "base_bdevs_list": [ 00:15:59.520 { 00:15:59.520 "name": null, 00:15:59.520 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:59.520 "is_configured": false, 00:15:59.520 "data_offset": 0, 00:15:59.520 "data_size": 65536 00:15:59.520 
}, 00:15:59.520 { 00:15:59.520 "name": "BaseBdev2", 00:15:59.520 "uuid": "3b11309e-3db8-57b0-9b17-4cbd7d358419", 00:15:59.520 "is_configured": true, 00:15:59.520 "data_offset": 0, 00:15:59.520 "data_size": 65536 00:15:59.520 }, 00:15:59.520 { 00:15:59.520 "name": "BaseBdev3", 00:15:59.520 "uuid": "e09b8682-7e50-5a68-9c9f-a43936b3e7b3", 00:15:59.520 "is_configured": true, 00:15:59.520 "data_offset": 0, 00:15:59.520 "data_size": 65536 00:15:59.520 }, 00:15:59.520 { 00:15:59.520 "name": "BaseBdev4", 00:15:59.520 "uuid": "4425e081-495b-5651-9c87-8f45c81e5e02", 00:15:59.520 "is_configured": true, 00:15:59.520 "data_offset": 0, 00:15:59.520 "data_size": 65536 00:15:59.520 } 00:15:59.520 ] 00:15:59.520 }' 00:15:59.520 04:06:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:59.520 04:06:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:15:59.780 [2024-12-06 04:06:52.885354] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:15:59.780 I/O size of 3145728 is greater than zero copy threshold (65536). 00:15:59.780 Zero copy mechanism will not be used. 00:15:59.780 Running I/O for 60 seconds... 
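Before the background I/O starts, the fixture has assembled raid_bdev1 from four malloc bdevs, each wrapped in a passthru bdev, as the rpc_cmd traces above show. A hedged sketch of that RPC sequence follows (rpc.py is a stand-in for the suite's rpc_cmd wrapper; the 32 MiB size and 512-byte block size are taken from the bdev_malloc_create calls in the log; the function only prints the commands rather than talking to a running SPDK target):

```shell
#!/bin/sh
# Sketch: the RPC sequence the raid_rebuild_test_io fixture issues to build
# raid_bdev1. Each base bdev is a malloc bdev behind a passthru bdev; the
# passthru layer is what the test later detaches (BaseBdev1) and replaces
# with the "spare" bdev to drive a rebuild.
rpc_calls() {
    for i in 1 2 3 4; do
        echo "rpc.py bdev_malloc_create 32 512 -b BaseBdev${i}_malloc"
        echo "rpc.py bdev_passthru_create -b BaseBdev${i}_malloc -p BaseBdev${i}"
    done
    echo "rpc.py bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n raid_bdev1"
}
rpc_calls
```

After bdev_raid_remove_base_bdev BaseBdev1 the log shows the array drop to 3 of 4 discovered bdevs while staying online, which is the precondition for the rebuild onto spare that the following section exercises.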
00:16:00.039 04:06:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare
00:16:00.039 04:06:53 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:00.039 04:06:53 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x
00:16:00.039 [2024-12-06 04:06:53.248296] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed
00:16:00.039 04:06:53 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:00.039 04:06:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@647 -- # sleep 1
00:16:00.039 [2024-12-06 04:06:53.319581] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000062f0 [2024-12-06 04:06:53.321824] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1
00:16:00.299 [2024-12-06 04:06:53.439792] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144
00:16:00.299 [2024-12-06 04:06:53.441327] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144
00:16:00.559 [2024-12-06 04:06:53.676926] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144
00:16:00.559 [2024-12-06 04:06:53.677717] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144
00:16:00.819 168.00 IOPS, 504.00 MiB/s [2024-12-06T04:06:54.173Z] [2024-12-06 04:06:54.031445] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288
00:16:00.819 [2024-12-06 04:06:54.168333] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288
00:16:01.079 04:06:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare
00:16:01.079 04:06:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1
00:16:01.079 04:06:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild
00:16:01.079 04:06:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare
00:16:01.079 04:06:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info
00:16:01.079 04:06:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all
00:16:01.079 04:06:54 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:01.079 04:06:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:16:01.079 04:06:54 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x
00:16:01.079 04:06:54 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:01.079 04:06:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{
00:16:01.079 "name": "raid_bdev1",
00:16:01.079 "uuid": "53fe43e3-8a42-4b36-8dca-07d70455ac2b",
00:16:01.079 "strip_size_kb": 0,
00:16:01.079 "state": "online",
00:16:01.079 "raid_level": "raid1",
00:16:01.079 "superblock": false,
00:16:01.079 "num_base_bdevs": 4,
00:16:01.079 "num_base_bdevs_discovered": 4,
00:16:01.079 "num_base_bdevs_operational": 4,
00:16:01.079 "process": {
00:16:01.079 "type": "rebuild",
00:16:01.079 "target": "spare",
00:16:01.079 "progress": {
00:16:01.079 "blocks": 12288,
00:16:01.079 "percent": 18
00:16:01.079 }
00:16:01.079 },
00:16:01.079 "base_bdevs_list": [
00:16:01.079 {
00:16:01.079 "name": "spare",
00:16:01.079 "uuid": "b9a5310f-70b4-55c9-a432-9c8d93b9e6d4",
00:16:01.079 "is_configured": true,
00:16:01.079 "data_offset": 0,
00:16:01.079 "data_size": 65536
00:16:01.079 },
00:16:01.079 {
00:16:01.079 "name": "BaseBdev2",
00:16:01.079 "uuid": "3b11309e-3db8-57b0-9b17-4cbd7d358419",
00:16:01.079 "is_configured": true,
00:16:01.079 "data_offset": 0,
00:16:01.079 "data_size": 65536
00:16:01.079 },
00:16:01.079 {
00:16:01.079 "name": "BaseBdev3",
00:16:01.079 "uuid": "e09b8682-7e50-5a68-9c9f-a43936b3e7b3",
00:16:01.079 "is_configured": true,
00:16:01.079 "data_offset": 0,
00:16:01.079 "data_size": 65536
00:16:01.079 },
00:16:01.079 {
00:16:01.079 "name": "BaseBdev4",
00:16:01.080 "uuid": "4425e081-495b-5651-9c87-8f45c81e5e02",
00:16:01.080 "is_configured": true,
00:16:01.080 "data_offset": 0,
00:16:01.080 "data_size": 65536
00:16:01.080 }
00:16:01.080 ]
00:16:01.080 }'
00:16:01.080 04:06:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"'
00:16:01.080 04:06:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]]
00:16:01.080 04:06:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"'
00:16:01.340 04:06:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]]
00:16:01.340 04:06:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare
00:16:01.340 04:06:54 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:01.340 04:06:54 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x
00:16:01.340 [2024-12-06 04:06:54.455631] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare
00:16:01.340 [2024-12-06 04:06:54.548331] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432
00:16:01.340 [2024-12-06 04:06:54.549212] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432
00:16:01.340 [2024-12-06 04:06:54.657946] bdev_raid.c:2571:raid_bdev_process_finish_done:
*WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device
00:16:01.340 [2024-12-06 04:06:54.676086] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:16:01.340 [2024-12-06 04:06:54.676160] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare
00:16:01.340 [2024-12-06 04:06:54.676177] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device
00:16:01.688 [2024-12-06 04:06:54.701464] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000006220
00:16:01.688 04:06:54 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:01.688 04:06:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3
00:16:01.688 04:06:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:16:01.688 04:06:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:16:01.688 04:06:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:16:01.688 04:06:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:16:01.688 04:06:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:16:01.688 04:06:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:16:01.688 04:06:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:16:01.688 04:06:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:16:01.688 04:06:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp
00:16:01.688 04:06:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:16:01.688 04:06:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:16:01.688 04:06:54 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:01.688 04:06:54 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x
00:16:01.688 04:06:54 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:01.688 04:06:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:16:01.688 "name": "raid_bdev1",
00:16:01.688 "uuid": "53fe43e3-8a42-4b36-8dca-07d70455ac2b",
00:16:01.688 "strip_size_kb": 0,
00:16:01.688 "state": "online",
00:16:01.688 "raid_level": "raid1",
00:16:01.688 "superblock": false,
00:16:01.688 "num_base_bdevs": 4,
00:16:01.688 "num_base_bdevs_discovered": 3,
00:16:01.688 "num_base_bdevs_operational": 3,
00:16:01.688 "base_bdevs_list": [
00:16:01.688 {
00:16:01.688 "name": null,
00:16:01.688 "uuid": "00000000-0000-0000-0000-000000000000",
00:16:01.688 "is_configured": false,
00:16:01.688 "data_offset": 0,
00:16:01.688 "data_size": 65536
00:16:01.688 },
00:16:01.688 {
00:16:01.688 "name": "BaseBdev2",
00:16:01.688 "uuid": "3b11309e-3db8-57b0-9b17-4cbd7d358419",
00:16:01.688 "is_configured": true,
00:16:01.688 "data_offset": 0,
00:16:01.688 "data_size": 65536
00:16:01.688 },
00:16:01.688 {
00:16:01.688 "name": "BaseBdev3",
00:16:01.688 "uuid": "e09b8682-7e50-5a68-9c9f-a43936b3e7b3",
00:16:01.688 "is_configured": true,
00:16:01.688 "data_offset": 0,
00:16:01.688 "data_size": 65536
00:16:01.688 },
00:16:01.688 {
00:16:01.688 "name": "BaseBdev4",
00:16:01.688 "uuid": "4425e081-495b-5651-9c87-8f45c81e5e02",
00:16:01.688 "is_configured": true,
00:16:01.688 "data_offset": 0,
00:16:01.688 "data_size": 65536
00:16:01.688 }
00:16:01.688 ]
00:16:01.688 }'
00:16:01.688 04:06:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:16:01.688 04:06:54 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x
00:16:01.945 130.50 IOPS, 391.50 MiB/s
[2024-12-06T04:06:55.299Z] 04:06:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none
00:16:01.945 04:06:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1
00:16:01.945 04:06:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=none
00:16:01.945 04:06:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=none
00:16:01.945 04:06:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info
00:16:01.945 04:06:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all
00:16:01.945 04:06:55 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:01.945 04:06:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:16:01.945 04:06:55 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x
00:16:01.945 04:06:55 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:01.945 04:06:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{
00:16:01.945 "name": "raid_bdev1",
00:16:01.945 "uuid": "53fe43e3-8a42-4b36-8dca-07d70455ac2b",
00:16:01.945 "strip_size_kb": 0,
00:16:01.945 "state": "online",
00:16:01.945 "raid_level": "raid1",
00:16:01.945 "superblock": false,
00:16:01.945 "num_base_bdevs": 4,
00:16:01.945 "num_base_bdevs_discovered": 3,
00:16:01.945 "num_base_bdevs_operational": 3,
00:16:01.945 "base_bdevs_list": [
00:16:01.945 {
00:16:01.945 "name": null,
00:16:01.945 "uuid": "00000000-0000-0000-0000-000000000000",
00:16:01.945 "is_configured": false,
00:16:01.945 "data_offset": 0,
00:16:01.945 "data_size": 65536
00:16:01.945 },
00:16:01.945 {
00:16:01.945 "name": "BaseBdev2",
00:16:01.945 "uuid": "3b11309e-3db8-57b0-9b17-4cbd7d358419",
00:16:01.945 "is_configured": true,
00:16:01.945 "data_offset": 0,
00:16:01.945 "data_size": 65536
00:16:01.945 },
00:16:01.945 {
00:16:01.945 "name": "BaseBdev3",
00:16:01.945 "uuid": "e09b8682-7e50-5a68-9c9f-a43936b3e7b3",
00:16:01.945 "is_configured": true,
00:16:01.945 "data_offset": 0,
00:16:01.945 "data_size": 65536
00:16:01.945 },
00:16:01.945 {
00:16:01.945 "name": "BaseBdev4",
00:16:01.945 "uuid": "4425e081-495b-5651-9c87-8f45c81e5e02",
00:16:01.945 "is_configured": true,
00:16:01.945 "data_offset": 0,
00:16:01.945 "data_size": 65536
00:16:01.945 }
00:16:01.945 ]
00:16:01.945 }'
00:16:01.945 04:06:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"'
00:16:01.945 04:06:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]]
00:16:01.945 04:06:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"'
00:16:02.204 04:06:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]]
00:16:02.204 04:06:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare
00:16:02.204 04:06:55 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:02.204 04:06:55 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x
00:16:02.204 [2024-12-06 04:06:55.317018] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed
00:16:02.204 04:06:55 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:02.204 04:06:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@663 -- # sleep 1
00:16:02.204 [2024-12-06 04:06:55.396983] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0
00:16:02.204 [2024-12-06 04:06:55.399428] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1
00:16:02.204 [2024-12-06 04:06:55.523529] bdev_raid.c: 859:raid_bdev_submit_rw_request:
*DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144
00:16:02.204 [2024-12-06 04:06:55.524204] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144
00:16:02.463 [2024-12-06 04:06:55.749671] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144
00:16:02.463 [2024-12-06 04:06:55.750653] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144
00:16:02.982 139.67 IOPS, 419.00 MiB/s [2024-12-06T04:06:56.336Z] [2024-12-06 04:06:56.114328] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288
00:16:02.982 [2024-12-06 04:06:56.115085] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288
00:16:02.982 [2024-12-06 04:06:56.327280] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288
00:16:03.241 04:06:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare
00:16:03.241 04:06:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1
00:16:03.241 04:06:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild
00:16:03.241 04:06:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare
00:16:03.241 04:06:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info
00:16:03.241 04:06:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all
00:16:03.241 04:06:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:16:03.241 04:06:56 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:03.241 04:06:56 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x
00:16:03.241 04:06:56 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:03.241 04:06:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{
00:16:03.241 "name": "raid_bdev1",
00:16:03.241 "uuid": "53fe43e3-8a42-4b36-8dca-07d70455ac2b",
00:16:03.241 "strip_size_kb": 0,
00:16:03.241 "state": "online",
00:16:03.241 "raid_level": "raid1",
00:16:03.241 "superblock": false,
00:16:03.241 "num_base_bdevs": 4,
00:16:03.241 "num_base_bdevs_discovered": 4,
00:16:03.241 "num_base_bdevs_operational": 4,
00:16:03.241 "process": {
00:16:03.241 "type": "rebuild",
00:16:03.241 "target": "spare",
00:16:03.241 "progress": {
00:16:03.241 "blocks": 10240,
00:16:03.241 "percent": 15
00:16:03.241 }
00:16:03.241 },
00:16:03.241 "base_bdevs_list": [
00:16:03.241 {
00:16:03.242 "name": "spare",
00:16:03.242 "uuid": "b9a5310f-70b4-55c9-a432-9c8d93b9e6d4",
00:16:03.242 "is_configured": true,
00:16:03.242 "data_offset": 0,
00:16:03.242 "data_size": 65536
00:16:03.242 },
00:16:03.242 {
00:16:03.242 "name": "BaseBdev2",
00:16:03.242 "uuid": "3b11309e-3db8-57b0-9b17-4cbd7d358419",
00:16:03.242 "is_configured": true,
00:16:03.242 "data_offset": 0,
00:16:03.242 "data_size": 65536
00:16:03.242 },
00:16:03.242 {
00:16:03.242 "name": "BaseBdev3",
00:16:03.242 "uuid": "e09b8682-7e50-5a68-9c9f-a43936b3e7b3",
00:16:03.242 "is_configured": true,
00:16:03.242 "data_offset": 0,
00:16:03.242 "data_size": 65536
00:16:03.242 },
00:16:03.242 {
00:16:03.242 "name": "BaseBdev4",
00:16:03.242 "uuid": "4425e081-495b-5651-9c87-8f45c81e5e02",
00:16:03.242 "is_configured": true,
00:16:03.242 "data_offset": 0,
00:16:03.242 "data_size": 65536
00:16:03.242 }
00:16:03.242 ]
00:16:03.242 }'
00:16:03.242 04:06:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"'
00:16:03.242 04:06:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]]
00:16:03.242 04:06:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"'
00:16:03.242 04:06:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]]
00:16:03.242 04:06:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@666 -- # '[' false = true ']'
00:16:03.242 04:06:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4
00:16:03.242 04:06:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']'
00:16:03.242 04:06:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@693 -- # '[' 4 -gt 2 ']'
00:16:03.242 04:06:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@695 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2
00:16:03.242 04:06:56 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:03.242 04:06:56 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x
00:16:03.242 [2024-12-06 04:06:56.523004] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2
00:16:03.501 [2024-12-06 04:06:56.642389] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d000006220
00:16:03.501 [2024-12-06 04:06:56.642542] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d0000063c0
00:16:03.501 04:06:56 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:03.501 04:06:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@698 -- # base_bdevs[1]=
00:16:03.501 04:06:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@699 -- # (( num_base_bdevs_operational-- ))
00:16:03.501 04:06:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@702 -- # verify_raid_bdev_process raid_bdev1 rebuild spare
00:16:03.501 04:06:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1
00:16:03.501 04:06:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild
00:16:03.501 04:06:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare
00:16:03.501 04:06:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info
00:16:03.501 04:06:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all
00:16:03.501 04:06:56 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:03.501 04:06:56 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x
00:16:03.501 04:06:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:16:03.501 04:06:56 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:03.501 04:06:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{
00:16:03.501 "name": "raid_bdev1",
00:16:03.501 "uuid": "53fe43e3-8a42-4b36-8dca-07d70455ac2b",
00:16:03.501 "strip_size_kb": 0,
00:16:03.501 "state": "online",
00:16:03.501 "raid_level": "raid1",
00:16:03.501 "superblock": false,
00:16:03.501 "num_base_bdevs": 4,
00:16:03.501 "num_base_bdevs_discovered": 3,
00:16:03.501 "num_base_bdevs_operational": 3,
00:16:03.501 "process": {
00:16:03.501 "type": "rebuild",
00:16:03.501 "target": "spare",
00:16:03.501 "progress": {
00:16:03.501 "blocks": 12288,
00:16:03.501 "percent": 18
00:16:03.501 }
00:16:03.501 },
00:16:03.501 "base_bdevs_list": [
00:16:03.501 {
00:16:03.501 "name": "spare",
00:16:03.501 "uuid": "b9a5310f-70b4-55c9-a432-9c8d93b9e6d4",
00:16:03.501 "is_configured": true,
00:16:03.501 "data_offset": 0,
00:16:03.501 "data_size": 65536
00:16:03.501 },
00:16:03.501 {
00:16:03.501 "name": null,
00:16:03.501 "uuid": "00000000-0000-0000-0000-000000000000",
00:16:03.501 "is_configured": false,
00:16:03.501 "data_offset": 0,
00:16:03.501 "data_size": 65536
},
00:16:03.501 {
00:16:03.501 "name": "BaseBdev3",
00:16:03.501 "uuid": "e09b8682-7e50-5a68-9c9f-a43936b3e7b3",
00:16:03.501 "is_configured": true,
00:16:03.501 "data_offset": 0,
00:16:03.501 "data_size": 65536
00:16:03.501 },
00:16:03.501 {
00:16:03.501 "name": "BaseBdev4",
00:16:03.501 "uuid": "4425e081-495b-5651-9c87-8f45c81e5e02",
00:16:03.501 "is_configured": true,
00:16:03.501 "data_offset": 0,
00:16:03.501 "data_size": 65536
00:16:03.501 }
00:16:03.501 ]
00:16:03.501 }'
00:16:03.501 04:06:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"'
00:16:03.501 04:06:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]]
00:16:03.501 04:06:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"'
00:16:03.502 04:06:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]]
00:16:03.502 04:06:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@706 -- # local timeout=494
00:16:03.502 04:06:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout ))
00:16:03.502 04:06:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare
00:16:03.502 04:06:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1
00:16:03.502 04:06:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild
00:16:03.502 04:06:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare
00:16:03.502 04:06:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info
00:16:03.502 [2024-12-06 04:06:56.782594] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432
00:16:03.502 04:06:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all
00:16:03.502 04:06:56 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:03.502 04:06:56 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x
00:16:03.502 04:06:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:16:03.502 04:06:56 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:03.502 04:06:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{
00:16:03.502 "name": "raid_bdev1",
00:16:03.502 "uuid": "53fe43e3-8a42-4b36-8dca-07d70455ac2b",
00:16:03.502 "strip_size_kb": 0,
00:16:03.502 "state": "online",
00:16:03.502 "raid_level": "raid1",
00:16:03.502 "superblock": false,
00:16:03.502 "num_base_bdevs": 4,
00:16:03.502 "num_base_bdevs_discovered": 3,
00:16:03.502 "num_base_bdevs_operational": 3,
00:16:03.502 "process": {
00:16:03.502 "type": "rebuild",
00:16:03.502 "target": "spare",
00:16:03.502 "progress": {
00:16:03.502 "blocks": 14336,
00:16:03.502 "percent": 21
00:16:03.502 }
00:16:03.502 },
00:16:03.502 "base_bdevs_list": [
00:16:03.502 {
00:16:03.502 "name": "spare",
00:16:03.502 "uuid": "b9a5310f-70b4-55c9-a432-9c8d93b9e6d4",
00:16:03.502 "is_configured": true,
00:16:03.502 "data_offset": 0,
00:16:03.502 "data_size": 65536
00:16:03.502 },
00:16:03.502 {
00:16:03.502 "name": null,
00:16:03.502 "uuid": "00000000-0000-0000-0000-000000000000",
00:16:03.502 "is_configured": false,
00:16:03.502 "data_offset": 0,
00:16:03.502 "data_size": 65536
00:16:03.502 },
00:16:03.502 {
00:16:03.502 "name": "BaseBdev3",
00:16:03.502 "uuid": "e09b8682-7e50-5a68-9c9f-a43936b3e7b3",
00:16:03.502 "is_configured": true,
00:16:03.502 "data_offset": 0,
00:16:03.502 "data_size": 65536
00:16:03.502 },
00:16:03.502 {
00:16:03.502 "name": "BaseBdev4",
00:16:03.502 "uuid": "4425e081-495b-5651-9c87-8f45c81e5e02",
00:16:03.502 "is_configured": true,
00:16:03.502 "data_offset": 0,
00:16:03.502 "data_size": 65536
00:16:03.502 }
00:16:03.502 ]
00:16:03.502 }'
00:16:03.502 04:06:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"'
00:16:03.760 04:06:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]]
00:16:03.760 04:06:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"'
00:16:03.760 121.00 IOPS, 363.00 MiB/s [2024-12-06T04:06:57.114Z] 04:06:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]]
00:16:03.760 04:06:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1
00:16:03.760 [2024-12-06 04:06:57.010665] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432
00:16:04.324 [2024-12-06 04:06:57.388750] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 offset_begin: 18432 offset_end: 24576
00:16:04.324 [2024-12-06 04:06:57.616605] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576
00:16:04.580 108.00 IOPS, 324.00 MiB/s [2024-12-06T04:06:57.934Z] [2024-12-06 04:06:57.929839] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 26624 offset_begin: 24576 offset_end: 30720
00:16:04.580 04:06:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout ))
00:16:04.839 04:06:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare
00:16:04.839 04:06:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1
00:16:04.839 04:06:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild
00:16:04.839 04:06:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare
00:16:04.839 04:06:57 bdev_raid.raid_rebuild_test_io --
bdev/bdev_raid.sh@172 -- # local raid_bdev_info
00:16:04.839 04:06:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all
00:16:04.839 04:06:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:16:04.839 04:06:57 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:04.839 04:06:57 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x
00:16:04.839 04:06:57 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:04.839 04:06:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{
00:16:04.839 "name": "raid_bdev1",
00:16:04.839 "uuid": "53fe43e3-8a42-4b36-8dca-07d70455ac2b",
00:16:04.839 "strip_size_kb": 0,
00:16:04.839 "state": "online",
00:16:04.839 "raid_level": "raid1",
00:16:04.839 "superblock": false,
00:16:04.839 "num_base_bdevs": 4,
00:16:04.839 "num_base_bdevs_discovered": 3,
00:16:04.839 "num_base_bdevs_operational": 3,
00:16:04.839 "process": {
00:16:04.839 "type": "rebuild",
00:16:04.839 "target": "spare",
00:16:04.839 "progress": {
00:16:04.839 "blocks": 26624,
00:16:04.839 "percent": 40
00:16:04.839 }
00:16:04.839 },
00:16:04.839 "base_bdevs_list": [
00:16:04.839 {
00:16:04.839 "name": "spare",
00:16:04.839 "uuid": "b9a5310f-70b4-55c9-a432-9c8d93b9e6d4",
00:16:04.839 "is_configured": true,
00:16:04.839 "data_offset": 0,
00:16:04.839 "data_size": 65536
00:16:04.839 },
00:16:04.839 {
00:16:04.839 "name": null,
00:16:04.839 "uuid": "00000000-0000-0000-0000-000000000000",
00:16:04.839 "is_configured": false,
00:16:04.839 "data_offset": 0,
00:16:04.839 "data_size": 65536
00:16:04.839 },
00:16:04.839 {
00:16:04.839 "name": "BaseBdev3",
00:16:04.839 "uuid": "e09b8682-7e50-5a68-9c9f-a43936b3e7b3",
00:16:04.839 "is_configured": true,
00:16:04.839 "data_offset": 0,
00:16:04.839 "data_size": 65536
00:16:04.839 },
00:16:04.839 {
00:16:04.839 "name": "BaseBdev4",
00:16:04.839 "uuid": "4425e081-495b-5651-9c87-8f45c81e5e02",
00:16:04.839 "is_configured": true,
00:16:04.839 "data_offset": 0,
00:16:04.839 "data_size": 65536
00:16:04.839 }
00:16:04.839 ]
00:16:04.839 }'
00:16:04.839 04:06:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"'
00:16:04.839 04:06:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]]
00:16:04.839 04:06:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"'
00:16:04.839 04:06:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]]
00:16:04.839 04:06:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1
00:16:05.096 [2024-12-06 04:06:58.262357] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 32768 offset_begin: 30720 offset_end: 36864
00:16:05.664 [2024-12-06 04:06:58.754660] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 40960 offset_begin: 36864 offset_end: 43008
00:16:05.922 96.67 IOPS, 290.00 MiB/s [2024-12-06T04:06:59.276Z] 04:06:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout ))
00:16:05.922 04:06:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare
00:16:05.922 04:06:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1
00:16:05.922 04:06:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild
00:16:05.922 04:06:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare
00:16:05.922 04:06:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info
00:16:05.922 04:06:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all
00:16:05.922 04:06:59 bdev_raid.raid_rebuild_test_io --
common/autotest_common.sh@563 -- # xtrace_disable
00:16:05.922 04:06:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:16:05.922 04:06:59 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x
00:16:05.922 [2024-12-06 04:06:59.107554] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 45056 offset_begin: 43008 offset_end: 49152
00:16:05.922 04:06:59 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:05.922 04:06:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{
00:16:05.922 "name": "raid_bdev1",
00:16:05.922 "uuid": "53fe43e3-8a42-4b36-8dca-07d70455ac2b",
00:16:05.922 "strip_size_kb": 0,
00:16:05.922 "state": "online",
00:16:05.922 "raid_level": "raid1",
00:16:05.922 "superblock": false,
00:16:05.922 "num_base_bdevs": 4,
00:16:05.922 "num_base_bdevs_discovered": 3,
00:16:05.922 "num_base_bdevs_operational": 3,
00:16:05.922 "process": {
00:16:05.922 "type": "rebuild",
00:16:05.922 "target": "spare",
00:16:05.922 "progress": {
00:16:05.922 "blocks": 43008,
00:16:05.922 "percent": 65
00:16:05.922 }
00:16:05.922 },
00:16:05.922 "base_bdevs_list": [
00:16:05.922 {
00:16:05.922 "name": "spare",
00:16:05.922 "uuid": "b9a5310f-70b4-55c9-a432-9c8d93b9e6d4",
00:16:05.922 "is_configured": true,
00:16:05.922 "data_offset": 0,
00:16:05.922 "data_size": 65536
00:16:05.922 },
00:16:05.922 {
00:16:05.922 "name": null,
00:16:05.922 "uuid": "00000000-0000-0000-0000-000000000000",
00:16:05.922 "is_configured": false,
00:16:05.922 "data_offset": 0,
00:16:05.922 "data_size": 65536
00:16:05.922 },
00:16:05.922 {
00:16:05.922 "name": "BaseBdev3",
00:16:05.922 "uuid": "e09b8682-7e50-5a68-9c9f-a43936b3e7b3",
00:16:05.922 "is_configured": true,
00:16:05.922 "data_offset": 0,
00:16:05.923 "data_size": 65536
00:16:05.923 },
00:16:05.923 {
00:16:05.923 "name": "BaseBdev4",
00:16:05.923 "uuid": "4425e081-495b-5651-9c87-8f45c81e5e02",
00:16:05.923 "is_configured": true,
00:16:05.923 "data_offset": 0,
00:16:05.923 "data_size": 65536
00:16:05.923 }
00:16:05.923 ]
00:16:05.923 }'
00:16:05.923 04:06:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"'
00:16:05.923 04:06:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]]
00:16:05.923 04:06:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"'
00:16:05.923 04:06:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]]
00:16:05.923 04:06:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1
00:16:06.180 [2024-12-06 04:06:59.326338] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 47104 offset_begin: 43008 offset_end: 49152
00:16:06.438 [2024-12-06 04:06:59.567384] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 51200 offset_begin: 49152 offset_end: 55296
00:16:06.438 [2024-12-06 04:06:59.785527] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 53248 offset_begin: 49152 offset_end: 55296
00:16:06.954 87.71 IOPS, 263.14 MiB/s [2024-12-06T04:07:00.308Z] [2024-12-06 04:07:00.131705] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 59392 offset_begin: 55296 offset_end: 61440
00:16:06.954 04:07:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout ))
00:16:06.954 04:07:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare
00:16:06.954 04:07:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1
00:16:06.954 04:07:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild
00:16:06.954 04:07:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare
00:16:06.954 04:07:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info
00:16:06.954 04:07:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all
00:16:06.954 04:07:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:16:06.954 04:07:00 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:06.954 04:07:00 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x
00:16:06.954 04:07:00 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:06.954 04:07:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{
00:16:06.954 "name": "raid_bdev1",
00:16:06.954 "uuid": "53fe43e3-8a42-4b36-8dca-07d70455ac2b",
00:16:06.954 "strip_size_kb": 0,
00:16:06.954 "state": "online",
00:16:06.954 "raid_level": "raid1",
00:16:06.954 "superblock": false,
00:16:06.954 "num_base_bdevs": 4,
00:16:06.954 "num_base_bdevs_discovered": 3,
00:16:06.954 "num_base_bdevs_operational": 3,
00:16:06.954 "process": {
00:16:06.954 "type": "rebuild",
00:16:06.954 "target": "spare",
00:16:06.954 "progress": {
00:16:06.954 "blocks": 59392,
00:16:06.954 "percent": 90
00:16:06.954 }
00:16:06.954 },
00:16:06.954 "base_bdevs_list": [
00:16:06.954 {
00:16:06.954 "name": "spare",
00:16:06.954 "uuid": "b9a5310f-70b4-55c9-a432-9c8d93b9e6d4",
00:16:06.954 "is_configured": true,
00:16:06.954 "data_offset": 0,
00:16:06.954 "data_size": 65536
00:16:06.954 },
00:16:06.954 {
00:16:06.954 "name": null,
00:16:06.954 "uuid": "00000000-0000-0000-0000-000000000000",
00:16:06.954 "is_configured": false,
00:16:06.954 "data_offset": 0,
00:16:06.954 "data_size": 65536
00:16:06.954 },
00:16:06.954 {
00:16:06.954 "name": "BaseBdev3",
00:16:06.954 "uuid": "e09b8682-7e50-5a68-9c9f-a43936b3e7b3",
00:16:06.954 "is_configured": true,
00:16:06.954 "data_offset": 0,
00:16:06.954
"data_size": 65536 00:16:06.954 }, 00:16:06.954 { 00:16:06.954 "name": "BaseBdev4", 00:16:06.954 "uuid": "4425e081-495b-5651-9c87-8f45c81e5e02", 00:16:06.954 "is_configured": true, 00:16:06.954 "data_offset": 0, 00:16:06.954 "data_size": 65536 00:16:06.954 } 00:16:06.954 ] 00:16:06.954 }' 00:16:06.954 04:07:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:07.212 04:07:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:07.212 04:07:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:07.212 04:07:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:07.212 04:07:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:07.470 [2024-12-06 04:07:00.565827] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:16:07.470 [2024-12-06 04:07:00.665682] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:16:07.470 [2024-12-06 04:07:00.668109] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:08.301 81.50 IOPS, 244.50 MiB/s [2024-12-06T04:07:01.655Z] 04:07:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:08.301 04:07:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:08.301 04:07:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:08.301 04:07:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:08.301 04:07:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:08.301 04:07:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:08.301 04:07:01 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:08.301 04:07:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:08.301 04:07:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:08.301 04:07:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:16:08.301 04:07:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:08.301 04:07:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:08.301 "name": "raid_bdev1", 00:16:08.301 "uuid": "53fe43e3-8a42-4b36-8dca-07d70455ac2b", 00:16:08.301 "strip_size_kb": 0, 00:16:08.301 "state": "online", 00:16:08.301 "raid_level": "raid1", 00:16:08.301 "superblock": false, 00:16:08.301 "num_base_bdevs": 4, 00:16:08.301 "num_base_bdevs_discovered": 3, 00:16:08.301 "num_base_bdevs_operational": 3, 00:16:08.301 "base_bdevs_list": [ 00:16:08.301 { 00:16:08.301 "name": "spare", 00:16:08.301 "uuid": "b9a5310f-70b4-55c9-a432-9c8d93b9e6d4", 00:16:08.301 "is_configured": true, 00:16:08.301 "data_offset": 0, 00:16:08.301 "data_size": 65536 00:16:08.301 }, 00:16:08.301 { 00:16:08.301 "name": null, 00:16:08.301 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:08.301 "is_configured": false, 00:16:08.301 "data_offset": 0, 00:16:08.301 "data_size": 65536 00:16:08.301 }, 00:16:08.301 { 00:16:08.301 "name": "BaseBdev3", 00:16:08.301 "uuid": "e09b8682-7e50-5a68-9c9f-a43936b3e7b3", 00:16:08.301 "is_configured": true, 00:16:08.301 "data_offset": 0, 00:16:08.301 "data_size": 65536 00:16:08.301 }, 00:16:08.301 { 00:16:08.301 "name": "BaseBdev4", 00:16:08.301 "uuid": "4425e081-495b-5651-9c87-8f45c81e5e02", 00:16:08.301 "is_configured": true, 00:16:08.301 "data_offset": 0, 00:16:08.301 "data_size": 65536 00:16:08.301 } 00:16:08.301 ] 00:16:08.301 }' 00:16:08.301 04:07:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r 
'.process.type // "none"' 00:16:08.301 04:07:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:16:08.301 04:07:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:08.301 04:07:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:16:08.301 04:07:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@709 -- # break 00:16:08.301 04:07:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:08.301 04:07:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:08.301 04:07:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:08.301 04:07:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:08.301 04:07:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:08.301 04:07:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:08.301 04:07:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:08.301 04:07:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:16:08.301 04:07:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:08.301 04:07:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:08.301 04:07:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:08.301 "name": "raid_bdev1", 00:16:08.301 "uuid": "53fe43e3-8a42-4b36-8dca-07d70455ac2b", 00:16:08.301 "strip_size_kb": 0, 00:16:08.301 "state": "online", 00:16:08.301 "raid_level": "raid1", 00:16:08.301 "superblock": false, 00:16:08.301 "num_base_bdevs": 4, 00:16:08.301 "num_base_bdevs_discovered": 3, 00:16:08.301 
"num_base_bdevs_operational": 3, 00:16:08.301 "base_bdevs_list": [ 00:16:08.301 { 00:16:08.301 "name": "spare", 00:16:08.301 "uuid": "b9a5310f-70b4-55c9-a432-9c8d93b9e6d4", 00:16:08.301 "is_configured": true, 00:16:08.301 "data_offset": 0, 00:16:08.301 "data_size": 65536 00:16:08.301 }, 00:16:08.301 { 00:16:08.301 "name": null, 00:16:08.301 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:08.301 "is_configured": false, 00:16:08.301 "data_offset": 0, 00:16:08.301 "data_size": 65536 00:16:08.301 }, 00:16:08.302 { 00:16:08.302 "name": "BaseBdev3", 00:16:08.302 "uuid": "e09b8682-7e50-5a68-9c9f-a43936b3e7b3", 00:16:08.302 "is_configured": true, 00:16:08.302 "data_offset": 0, 00:16:08.302 "data_size": 65536 00:16:08.302 }, 00:16:08.302 { 00:16:08.302 "name": "BaseBdev4", 00:16:08.302 "uuid": "4425e081-495b-5651-9c87-8f45c81e5e02", 00:16:08.302 "is_configured": true, 00:16:08.302 "data_offset": 0, 00:16:08.302 "data_size": 65536 00:16:08.302 } 00:16:08.302 ] 00:16:08.302 }' 00:16:08.302 04:07:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:08.302 04:07:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:08.302 04:07:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:08.586 04:07:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:08.586 04:07:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:16:08.586 04:07:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:08.586 04:07:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:08.586 04:07:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:08.586 04:07:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 
00:16:08.586 04:07:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:08.586 04:07:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:08.586 04:07:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:08.586 04:07:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:08.586 04:07:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:08.586 04:07:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:08.586 04:07:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:08.586 04:07:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:16:08.586 04:07:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:08.586 04:07:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:08.586 04:07:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:08.586 "name": "raid_bdev1", 00:16:08.586 "uuid": "53fe43e3-8a42-4b36-8dca-07d70455ac2b", 00:16:08.586 "strip_size_kb": 0, 00:16:08.586 "state": "online", 00:16:08.586 "raid_level": "raid1", 00:16:08.586 "superblock": false, 00:16:08.586 "num_base_bdevs": 4, 00:16:08.586 "num_base_bdevs_discovered": 3, 00:16:08.586 "num_base_bdevs_operational": 3, 00:16:08.586 "base_bdevs_list": [ 00:16:08.586 { 00:16:08.586 "name": "spare", 00:16:08.586 "uuid": "b9a5310f-70b4-55c9-a432-9c8d93b9e6d4", 00:16:08.586 "is_configured": true, 00:16:08.586 "data_offset": 0, 00:16:08.586 "data_size": 65536 00:16:08.586 }, 00:16:08.586 { 00:16:08.586 "name": null, 00:16:08.586 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:08.586 "is_configured": false, 00:16:08.586 "data_offset": 0, 00:16:08.586 "data_size": 65536 
00:16:08.586 }, 00:16:08.586 { 00:16:08.586 "name": "BaseBdev3", 00:16:08.586 "uuid": "e09b8682-7e50-5a68-9c9f-a43936b3e7b3", 00:16:08.586 "is_configured": true, 00:16:08.586 "data_offset": 0, 00:16:08.586 "data_size": 65536 00:16:08.586 }, 00:16:08.586 { 00:16:08.586 "name": "BaseBdev4", 00:16:08.586 "uuid": "4425e081-495b-5651-9c87-8f45c81e5e02", 00:16:08.586 "is_configured": true, 00:16:08.586 "data_offset": 0, 00:16:08.586 "data_size": 65536 00:16:08.586 } 00:16:08.586 ] 00:16:08.586 }' 00:16:08.586 04:07:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:08.586 04:07:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:16:08.849 75.67 IOPS, 227.00 MiB/s [2024-12-06T04:07:02.203Z] 04:07:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:16:08.849 04:07:02 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:08.849 04:07:02 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:16:08.849 [2024-12-06 04:07:02.141661] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:08.849 [2024-12-06 04:07:02.141785] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:09.109 00:16:09.109 Latency(us) 00:16:09.109 [2024-12-06T04:07:02.463Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:09.109 Job: raid_bdev1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 2, IO size: 3145728) 00:16:09.110 raid_bdev1 : 9.35 74.13 222.40 0.00 0.00 20032.00 309.44 119052.30 00:16:09.110 [2024-12-06T04:07:02.464Z] =================================================================================================================== 00:16:09.110 [2024-12-06T04:07:02.464Z] Total : 74.13 222.40 0.00 0.00 20032.00 309.44 119052.30 00:16:09.110 { 00:16:09.110 "results": [ 00:16:09.110 { 00:16:09.110 "job": "raid_bdev1", 
00:16:09.110 "core_mask": "0x1", 00:16:09.110 "workload": "randrw", 00:16:09.110 "percentage": 50, 00:16:09.110 "status": "finished", 00:16:09.110 "queue_depth": 2, 00:16:09.110 "io_size": 3145728, 00:16:09.110 "runtime": 9.347916, 00:16:09.110 "iops": 74.13417065365158, 00:16:09.110 "mibps": 222.40251196095474, 00:16:09.110 "io_failed": 0, 00:16:09.110 "io_timeout": 0, 00:16:09.110 "avg_latency_us": 20032.004758754105, 00:16:09.110 "min_latency_us": 309.435807860262, 00:16:09.110 "max_latency_us": 119052.29694323144 00:16:09.110 } 00:16:09.110 ], 00:16:09.110 "core_count": 1 00:16:09.110 } 00:16:09.110 [2024-12-06 04:07:02.242479] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:09.110 [2024-12-06 04:07:02.242564] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:09.110 [2024-12-06 04:07:02.242674] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:09.110 [2024-12-06 04:07:02.242685] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:16:09.110 04:07:02 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:09.110 04:07:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # jq length 00:16:09.110 04:07:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:09.110 04:07:02 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:09.110 04:07:02 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:16:09.110 04:07:02 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:09.110 04:07:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:16:09.110 04:07:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:16:09.110 04:07:02 bdev_raid.raid_rebuild_test_io 
-- bdev/bdev_raid.sh@723 -- # '[' true = true ']' 00:16:09.110 04:07:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@725 -- # nbd_start_disks /var/tmp/spdk.sock spare /dev/nbd0 00:16:09.110 04:07:02 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:16:09.110 04:07:02 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=('spare') 00:16:09.110 04:07:02 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:16:09.110 04:07:02 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:16:09.110 04:07:02 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:16:09.110 04:07:02 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i 00:16:09.110 04:07:02 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:16:09.110 04:07:02 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:16:09.110 04:07:02 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd0 00:16:09.370 /dev/nbd0 00:16:09.370 04:07:02 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:16:09.370 04:07:02 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:16:09.370 04:07:02 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:16:09.370 04:07:02 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@873 -- # local i 00:16:09.370 04:07:02 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:16:09.370 04:07:02 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:16:09.370 04:07:02 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:16:09.370 04:07:02 bdev_raid.raid_rebuild_test_io -- 
common/autotest_common.sh@877 -- # break 00:16:09.370 04:07:02 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:16:09.370 04:07:02 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:16:09.370 04:07:02 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:16:09.370 1+0 records in 00:16:09.370 1+0 records out 00:16:09.370 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000363958 s, 11.3 MB/s 00:16:09.370 04:07:02 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:09.370 04:07:02 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # size=4096 00:16:09.370 04:07:02 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:09.370 04:07:02 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:16:09.370 04:07:02 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@893 -- # return 0 00:16:09.370 04:07:02 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:16:09.370 04:07:02 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:16:09.370 04:07:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:16:09.370 04:07:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@727 -- # '[' -z '' ']' 00:16:09.370 04:07:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@728 -- # continue 00:16:09.370 04:07:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:16:09.370 04:07:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@727 -- # '[' -z BaseBdev3 ']' 00:16:09.370 04:07:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@730 -- # nbd_start_disks 
/var/tmp/spdk.sock BaseBdev3 /dev/nbd1 00:16:09.370 04:07:02 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:16:09.370 04:07:02 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev3') 00:16:09.370 04:07:02 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:16:09.370 04:07:02 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:16:09.370 04:07:02 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:16:09.370 04:07:02 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i 00:16:09.370 04:07:02 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:16:09.370 04:07:02 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:16:09.370 04:07:02 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev3 /dev/nbd1 00:16:09.631 /dev/nbd1 00:16:09.631 04:07:02 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:16:09.631 04:07:02 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:16:09.631 04:07:02 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:16:09.631 04:07:02 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@873 -- # local i 00:16:09.631 04:07:02 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:16:09.631 04:07:02 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:16:09.631 04:07:02 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:16:09.631 04:07:02 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@877 -- # break 00:16:09.631 04:07:02 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # (( i = 1 )) 
00:16:09.631 04:07:02 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:16:09.631 04:07:02 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:16:09.631 1+0 records in 00:16:09.631 1+0 records out 00:16:09.631 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000552814 s, 7.4 MB/s 00:16:09.631 04:07:02 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:09.631 04:07:02 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # size=4096 00:16:09.631 04:07:02 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:09.631 04:07:02 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:16:09.631 04:07:02 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@893 -- # return 0 00:16:09.631 04:07:02 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:16:09.631 04:07:02 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:16:09.631 04:07:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@731 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:16:09.891 04:07:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@732 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1 00:16:09.891 04:07:03 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:16:09.891 04:07:03 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:16:09.891 04:07:03 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:16:09.891 04:07:03 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:16:09.891 04:07:03 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:09.891 04:07:03 
bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:16:10.150 04:07:03 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:16:10.150 04:07:03 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:16:10.150 04:07:03 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:16:10.150 04:07:03 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:10.150 04:07:03 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:10.150 04:07:03 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:16:10.150 04:07:03 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 00:16:10.150 04:07:03 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0 00:16:10.150 04:07:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:16:10.150 04:07:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@727 -- # '[' -z BaseBdev4 ']' 00:16:10.150 04:07:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@730 -- # nbd_start_disks /var/tmp/spdk.sock BaseBdev4 /dev/nbd1 00:16:10.150 04:07:03 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:16:10.150 04:07:03 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev4') 00:16:10.150 04:07:03 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:16:10.150 04:07:03 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:16:10.150 04:07:03 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:16:10.150 04:07:03 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i 00:16:10.150 04:07:03 bdev_raid.raid_rebuild_test_io -- 
bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:16:10.150 04:07:03 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:16:10.150 04:07:03 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev4 /dev/nbd1 00:16:10.409 /dev/nbd1 00:16:10.409 04:07:03 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:16:10.409 04:07:03 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:16:10.409 04:07:03 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:16:10.409 04:07:03 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@873 -- # local i 00:16:10.409 04:07:03 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:16:10.409 04:07:03 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:16:10.409 04:07:03 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:16:10.409 04:07:03 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@877 -- # break 00:16:10.409 04:07:03 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:16:10.409 04:07:03 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:16:10.409 04:07:03 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:16:10.409 1+0 records in 00:16:10.409 1+0 records out 00:16:10.409 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000395661 s, 10.4 MB/s 00:16:10.409 04:07:03 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:10.409 04:07:03 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # size=4096 00:16:10.409 04:07:03 
bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:10.409 04:07:03 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:16:10.409 04:07:03 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@893 -- # return 0 00:16:10.409 04:07:03 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:16:10.409 04:07:03 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:16:10.409 04:07:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@731 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:16:10.409 04:07:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@732 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1 00:16:10.409 04:07:03 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:16:10.409 04:07:03 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:16:10.409 04:07:03 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:16:10.409 04:07:03 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:16:10.409 04:07:03 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:10.409 04:07:03 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:16:10.669 04:07:03 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:16:10.669 04:07:03 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:16:10.669 04:07:03 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:16:10.669 04:07:03 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:10.669 04:07:03 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:10.669 04:07:03 
bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:16:10.669 04:07:03 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 00:16:10.669 04:07:03 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0 00:16:10.669 04:07:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@734 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:16:10.669 04:07:03 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:16:10.669 04:07:03 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:16:10.669 04:07:03 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:16:10.669 04:07:03 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:16:10.669 04:07:03 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:10.669 04:07:03 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:16:10.928 04:07:04 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:16:10.929 04:07:04 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:16:10.929 04:07:04 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:16:10.929 04:07:04 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:10.929 04:07:04 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:10.929 04:07:04 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:16:10.929 04:07:04 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 00:16:10.929 04:07:04 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0 00:16:10.929 04:07:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@743 -- # '[' false = true ']' 
00:16:10.929 04:07:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@784 -- # killprocess 78985 00:16:10.929 04:07:04 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@954 -- # '[' -z 78985 ']' 00:16:10.929 04:07:04 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@958 -- # kill -0 78985 00:16:10.929 04:07:04 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@959 -- # uname 00:16:10.929 04:07:04 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:10.929 04:07:04 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 78985 00:16:10.929 killing process with pid 78985 00:16:10.929 Received shutdown signal, test time was about 11.356578 seconds 00:16:10.929 00:16:10.929 Latency(us) 00:16:10.929 [2024-12-06T04:07:04.283Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:10.929 [2024-12-06T04:07:04.283Z] =================================================================================================================== 00:16:10.929 [2024-12-06T04:07:04.283Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:16:10.929 04:07:04 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:10.929 04:07:04 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:10.929 04:07:04 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@972 -- # echo 'killing process with pid 78985' 00:16:10.929 04:07:04 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@973 -- # kill 78985 00:16:10.929 04:07:04 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@978 -- # wait 78985 00:16:10.929 [2024-12-06 04:07:04.222247] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:16:11.496 [2024-12-06 04:07:04.739196] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:16:12.873 04:07:06 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@786 -- # return 0 00:16:12.873 00:16:12.873 real 0m14.984s 00:16:12.873 user 0m18.647s 00:16:12.873 sys 0m1.787s 00:16:12.873 04:07:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:12.873 04:07:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:16:12.873 ************************************ 00:16:12.873 END TEST raid_rebuild_test_io 00:16:12.873 ************************************ 00:16:12.873 04:07:06 bdev_raid -- bdev/bdev_raid.sh@981 -- # run_test raid_rebuild_test_sb_io raid_rebuild_test raid1 4 true true true 00:16:12.873 04:07:06 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:16:12.873 04:07:06 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:12.873 04:07:06 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:16:12.873 ************************************ 00:16:12.873 START TEST raid_rebuild_test_sb_io 00:16:12.873 ************************************ 00:16:12.873 04:07:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 4 true true true 00:16:12.873 04:07:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:16:12.873 04:07:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4 00:16:12.873 04:07:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:16:12.873 04:07:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@572 -- # local background_io=true 00:16:12.873 04:07:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@573 -- # local verify=true 00:16:12.873 04:07:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:16:12.873 04:07:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:16:12.873 04:07:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 
00:16:12.873 04:07:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:16:12.873 04:07:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:16:12.873 04:07:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:16:12.873 04:07:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:16:12.873 04:07:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:16:12.873 04:07:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:16:12.873 04:07:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:16:12.873 04:07:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:16:12.873 04:07:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev4 00:16:12.873 04:07:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:16:12.873 04:07:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:16:12.873 04:07:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:16:12.873 04:07:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:16:12.873 04:07:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:16:12.873 04:07:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # local strip_size 00:16:12.873 04:07:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@577 -- # local create_arg 00:16:12.873 04:07:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:16:12.873 04:07:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@579 -- # local data_offset 00:16:12.873 04:07:06 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:16:12.873 04:07:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:16:12.873 04:07:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:16:12.873 04:07:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:16:12.873 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:12.873 04:07:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@597 -- # raid_pid=79424 00:16:12.873 04:07:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@598 -- # waitforlisten 79424 00:16:12.873 04:07:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@835 -- # '[' -z 79424 ']' 00:16:12.873 04:07:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:16:12.873 04:07:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:12.873 04:07:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:12.873 04:07:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:12.873 04:07:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:12.873 04:07:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:12.873 [2024-12-06 04:07:06.171213] Starting SPDK v25.01-pre git sha1 a4d2a837b / DPDK 24.03.0 initialization... 00:16:12.873 [2024-12-06 04:07:06.171407] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.ealI/O size of 3145728 is greater than zero copy threshold (65536). 
00:16:12.873 Zero copy mechanism will not be used. 00:16:12.873 :6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79424 ] 00:16:13.132 [2024-12-06 04:07:06.328829] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:13.132 [2024-12-06 04:07:06.444772] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:13.390 [2024-12-06 04:07:06.656889] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:13.390 [2024-12-06 04:07:06.657008] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:13.958 04:07:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:13.958 04:07:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@868 -- # return 0 00:16:13.958 04:07:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:16:13.958 04:07:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:16:13.958 04:07:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:13.958 04:07:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:13.958 BaseBdev1_malloc 00:16:13.958 04:07:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:13.958 04:07:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:16:13.958 04:07:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:13.958 04:07:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:13.958 [2024-12-06 04:07:07.071382] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:16:13.958 [2024-12-06 
04:07:07.071474] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:13.958 [2024-12-06 04:07:07.071501] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:16:13.958 [2024-12-06 04:07:07.071512] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:13.958 [2024-12-06 04:07:07.073664] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:13.958 [2024-12-06 04:07:07.073790] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:16:13.958 BaseBdev1 00:16:13.958 04:07:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:13.958 04:07:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:16:13.958 04:07:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:16:13.958 04:07:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:13.958 04:07:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:13.958 BaseBdev2_malloc 00:16:13.958 04:07:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:13.958 04:07:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:16:13.958 04:07:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:13.958 04:07:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:13.958 [2024-12-06 04:07:07.126412] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:16:13.958 [2024-12-06 04:07:07.126482] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:13.958 [2024-12-06 04:07:07.126506] vbdev_passthru.c: 
682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:16:13.958 [2024-12-06 04:07:07.126517] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:13.958 [2024-12-06 04:07:07.128654] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:13.958 [2024-12-06 04:07:07.128691] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:16:13.958 BaseBdev2 00:16:13.958 04:07:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:13.958 04:07:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:16:13.958 04:07:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:16:13.958 04:07:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:13.958 04:07:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:13.958 BaseBdev3_malloc 00:16:13.958 04:07:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:13.958 04:07:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:16:13.958 04:07:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:13.958 04:07:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:13.958 [2024-12-06 04:07:07.195212] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:16:13.958 [2024-12-06 04:07:07.195270] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:13.958 [2024-12-06 04:07:07.195292] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:16:13.958 [2024-12-06 04:07:07.195302] vbdev_passthru.c: 697:vbdev_passthru_register: 
*NOTICE*: bdev claimed 00:16:13.958 [2024-12-06 04:07:07.197360] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:13.958 [2024-12-06 04:07:07.197401] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:16:13.958 BaseBdev3 00:16:13.958 04:07:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:13.958 04:07:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:16:13.958 04:07:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:16:13.958 04:07:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:13.958 04:07:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:13.958 BaseBdev4_malloc 00:16:13.958 04:07:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:13.958 04:07:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:16:13.958 04:07:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:13.958 04:07:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:13.958 [2024-12-06 04:07:07.248969] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:16:13.958 [2024-12-06 04:07:07.249039] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:13.958 [2024-12-06 04:07:07.249079] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:16:13.958 [2024-12-06 04:07:07.249092] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:13.958 [2024-12-06 04:07:07.251307] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:13.958 [2024-12-06 
04:07:07.251348] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:16:13.958 BaseBdev4 00:16:13.958 04:07:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:13.958 04:07:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:16:13.958 04:07:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:13.958 04:07:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:13.958 spare_malloc 00:16:13.958 04:07:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:13.958 04:07:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:16:13.958 04:07:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:13.958 04:07:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:13.958 spare_delay 00:16:13.958 04:07:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:13.958 04:07:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:16:13.958 04:07:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:13.958 04:07:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:14.218 [2024-12-06 04:07:07.315743] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:16:14.218 [2024-12-06 04:07:07.315804] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:14.218 [2024-12-06 04:07:07.315823] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:16:14.218 [2024-12-06 04:07:07.315834] 
vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:14.218 [2024-12-06 04:07:07.318120] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:14.218 [2024-12-06 04:07:07.318163] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:16:14.218 spare 00:16:14.218 04:07:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:14.218 04:07:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 00:16:14.218 04:07:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:14.218 04:07:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:14.218 [2024-12-06 04:07:07.327794] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:14.218 [2024-12-06 04:07:07.329911] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:14.218 [2024-12-06 04:07:07.330064] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:16:14.218 [2024-12-06 04:07:07.330135] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:16:14.218 [2024-12-06 04:07:07.330345] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:16:14.218 [2024-12-06 04:07:07.330364] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:16:14.218 [2024-12-06 04:07:07.330664] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:16:14.218 [2024-12-06 04:07:07.330876] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:16:14.218 [2024-12-06 04:07:07.330888] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 
0x617000007780 00:16:14.218 [2024-12-06 04:07:07.331077] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:14.218 04:07:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:14.218 04:07:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:16:14.218 04:07:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:14.218 04:07:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:14.218 04:07:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:14.218 04:07:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:14.218 04:07:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:14.218 04:07:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:14.218 04:07:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:14.218 04:07:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:14.218 04:07:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:14.218 04:07:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:14.218 04:07:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:14.218 04:07:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:14.218 04:07:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:14.218 04:07:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:14.218 04:07:07 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:14.218 "name": "raid_bdev1", 00:16:14.218 "uuid": "8a44c5c4-5201-411b-be96-4dfc57b387d0", 00:16:14.218 "strip_size_kb": 0, 00:16:14.218 "state": "online", 00:16:14.218 "raid_level": "raid1", 00:16:14.218 "superblock": true, 00:16:14.218 "num_base_bdevs": 4, 00:16:14.218 "num_base_bdevs_discovered": 4, 00:16:14.218 "num_base_bdevs_operational": 4, 00:16:14.218 "base_bdevs_list": [ 00:16:14.218 { 00:16:14.218 "name": "BaseBdev1", 00:16:14.218 "uuid": "3eb917dd-a0a6-5295-a5a1-a98a12c97059", 00:16:14.218 "is_configured": true, 00:16:14.218 "data_offset": 2048, 00:16:14.218 "data_size": 63488 00:16:14.218 }, 00:16:14.218 { 00:16:14.218 "name": "BaseBdev2", 00:16:14.218 "uuid": "dc5c58f1-a0ed-544f-bfe0-ddfe4389ae48", 00:16:14.218 "is_configured": true, 00:16:14.218 "data_offset": 2048, 00:16:14.218 "data_size": 63488 00:16:14.218 }, 00:16:14.218 { 00:16:14.218 "name": "BaseBdev3", 00:16:14.218 "uuid": "94c489d5-5b9e-51a0-853b-afb9f6d46968", 00:16:14.218 "is_configured": true, 00:16:14.218 "data_offset": 2048, 00:16:14.218 "data_size": 63488 00:16:14.218 }, 00:16:14.218 { 00:16:14.218 "name": "BaseBdev4", 00:16:14.218 "uuid": "a1af2076-f95a-5757-bd4e-c8624781b061", 00:16:14.218 "is_configured": true, 00:16:14.218 "data_offset": 2048, 00:16:14.218 "data_size": 63488 00:16:14.218 } 00:16:14.218 ] 00:16:14.218 }' 00:16:14.218 04:07:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:14.218 04:07:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:14.552 04:07:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:16:14.552 04:07:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:16:14.552 04:07:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:14.552 04:07:07 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@10 -- # set +x 00:16:14.552 [2024-12-06 04:07:07.783478] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:14.552 04:07:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:14.552 04:07:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=63488 00:16:14.552 04:07:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:14.552 04:07:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:14.552 04:07:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:14.552 04:07:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:16:14.552 04:07:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:14.827 04:07:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # data_offset=2048 00:16:14.827 04:07:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@621 -- # '[' true = true ']' 00:16:14.827 04:07:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:16:14.827 04:07:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@623 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:16:14.827 04:07:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:14.827 04:07:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:14.827 [2024-12-06 04:07:07.878861] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:16:14.827 04:07:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:14.827 04:07:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 
0 3 00:16:14.827 04:07:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:14.827 04:07:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:14.827 04:07:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:14.827 04:07:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:14.827 04:07:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:14.827 04:07:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:14.827 04:07:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:14.827 04:07:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:14.827 04:07:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:14.827 04:07:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:14.827 04:07:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:14.827 04:07:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:14.827 04:07:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:14.827 04:07:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:14.827 04:07:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:14.827 "name": "raid_bdev1", 00:16:14.827 "uuid": "8a44c5c4-5201-411b-be96-4dfc57b387d0", 00:16:14.827 "strip_size_kb": 0, 00:16:14.827 "state": "online", 00:16:14.827 "raid_level": "raid1", 00:16:14.827 "superblock": true, 00:16:14.827 "num_base_bdevs": 4, 00:16:14.827 "num_base_bdevs_discovered": 3, 00:16:14.827 
"num_base_bdevs_operational": 3, 00:16:14.827 "base_bdevs_list": [ 00:16:14.827 { 00:16:14.827 "name": null, 00:16:14.827 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:14.827 "is_configured": false, 00:16:14.827 "data_offset": 0, 00:16:14.827 "data_size": 63488 00:16:14.827 }, 00:16:14.827 { 00:16:14.827 "name": "BaseBdev2", 00:16:14.827 "uuid": "dc5c58f1-a0ed-544f-bfe0-ddfe4389ae48", 00:16:14.827 "is_configured": true, 00:16:14.827 "data_offset": 2048, 00:16:14.827 "data_size": 63488 00:16:14.827 }, 00:16:14.827 { 00:16:14.827 "name": "BaseBdev3", 00:16:14.827 "uuid": "94c489d5-5b9e-51a0-853b-afb9f6d46968", 00:16:14.827 "is_configured": true, 00:16:14.827 "data_offset": 2048, 00:16:14.827 "data_size": 63488 00:16:14.827 }, 00:16:14.827 { 00:16:14.827 "name": "BaseBdev4", 00:16:14.827 "uuid": "a1af2076-f95a-5757-bd4e-c8624781b061", 00:16:14.827 "is_configured": true, 00:16:14.827 "data_offset": 2048, 00:16:14.827 "data_size": 63488 00:16:14.827 } 00:16:14.827 ] 00:16:14.827 }' 00:16:14.827 04:07:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:14.827 04:07:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:14.827 [2024-12-06 04:07:07.992053] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:16:14.827 I/O size of 3145728 is greater than zero copy threshold (65536). 00:16:14.827 Zero copy mechanism will not be used. 00:16:14.827 Running I/O for 60 seconds... 
00:16:15.088 04:07:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:16:15.088 04:07:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:15.088 04:07:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:15.088 [2024-12-06 04:07:08.302449] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:15.088 04:07:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:15.088 04:07:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@647 -- # sleep 1 00:16:15.088 [2024-12-06 04:07:08.358875] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000062f0 00:16:15.088 [2024-12-06 04:07:08.361061] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:16:15.348 [2024-12-06 04:07:08.479910] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:16:15.348 [2024-12-06 04:07:08.481533] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:16:15.608 [2024-12-06 04:07:08.707783] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:16:15.608 [2024-12-06 04:07:08.941529] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:16:15.608 [2024-12-06 04:07:08.942296] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:16:15.867 162.00 IOPS, 486.00 MiB/s [2024-12-06T04:07:09.221Z] [2024-12-06 04:07:09.064201] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:16:15.867 [2024-12-06 04:07:09.064685] bdev_raid.c: 
859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:16:16.127 04:07:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:16.127 04:07:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:16.127 04:07:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:16.127 04:07:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:16.127 04:07:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:16.127 04:07:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:16.127 04:07:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:16.127 04:07:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:16.127 04:07:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:16.127 04:07:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:16.127 04:07:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:16.127 "name": "raid_bdev1", 00:16:16.127 "uuid": "8a44c5c4-5201-411b-be96-4dfc57b387d0", 00:16:16.127 "strip_size_kb": 0, 00:16:16.127 "state": "online", 00:16:16.127 "raid_level": "raid1", 00:16:16.127 "superblock": true, 00:16:16.127 "num_base_bdevs": 4, 00:16:16.127 "num_base_bdevs_discovered": 4, 00:16:16.127 "num_base_bdevs_operational": 4, 00:16:16.127 "process": { 00:16:16.127 "type": "rebuild", 00:16:16.127 "target": "spare", 00:16:16.127 "progress": { 00:16:16.127 "blocks": 14336, 00:16:16.127 "percent": 22 00:16:16.127 } 00:16:16.127 }, 00:16:16.127 "base_bdevs_list": [ 00:16:16.127 { 00:16:16.127 "name": "spare", 
00:16:16.127 "uuid": "5839e026-dba2-5cf9-a2ff-d5da9aaf4211", 00:16:16.127 "is_configured": true, 00:16:16.127 "data_offset": 2048, 00:16:16.127 "data_size": 63488 00:16:16.127 }, 00:16:16.127 { 00:16:16.127 "name": "BaseBdev2", 00:16:16.127 "uuid": "dc5c58f1-a0ed-544f-bfe0-ddfe4389ae48", 00:16:16.127 "is_configured": true, 00:16:16.127 "data_offset": 2048, 00:16:16.127 "data_size": 63488 00:16:16.127 }, 00:16:16.127 { 00:16:16.127 "name": "BaseBdev3", 00:16:16.127 "uuid": "94c489d5-5b9e-51a0-853b-afb9f6d46968", 00:16:16.127 "is_configured": true, 00:16:16.127 "data_offset": 2048, 00:16:16.127 "data_size": 63488 00:16:16.127 }, 00:16:16.127 { 00:16:16.127 "name": "BaseBdev4", 00:16:16.127 "uuid": "a1af2076-f95a-5757-bd4e-c8624781b061", 00:16:16.127 "is_configured": true, 00:16:16.127 "data_offset": 2048, 00:16:16.127 "data_size": 63488 00:16:16.128 } 00:16:16.128 ] 00:16:16.128 }' 00:16:16.128 04:07:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:16.128 04:07:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:16.128 04:07:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:16.128 [2024-12-06 04:07:09.440258] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:16:16.128 04:07:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:16.128 04:07:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:16:16.128 04:07:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:16.128 04:07:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:16.128 [2024-12-06 04:07:09.459203] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:16.387 [2024-12-06 
04:07:09.552393] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:16:16.388 [2024-12-06 04:07:09.663139] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:16:16.388 [2024-12-06 04:07:09.668755] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:16.388 [2024-12-06 04:07:09.668831] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:16.388 [2024-12-06 04:07:09.668850] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:16:16.388 [2024-12-06 04:07:09.705545] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000006220 00:16:16.388 04:07:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:16.388 04:07:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:16:16.388 04:07:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:16.388 04:07:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:16.388 04:07:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:16.388 04:07:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:16.388 04:07:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:16.388 04:07:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:16.388 04:07:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:16.388 04:07:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:16.388 04:07:09 bdev_raid.raid_rebuild_test_sb_io 
-- bdev/bdev_raid.sh@111 -- # local tmp 00:16:16.388 04:07:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:16.388 04:07:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:16.388 04:07:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:16.388 04:07:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:16.649 04:07:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:16.649 04:07:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:16.649 "name": "raid_bdev1", 00:16:16.649 "uuid": "8a44c5c4-5201-411b-be96-4dfc57b387d0", 00:16:16.649 "strip_size_kb": 0, 00:16:16.649 "state": "online", 00:16:16.649 "raid_level": "raid1", 00:16:16.649 "superblock": true, 00:16:16.649 "num_base_bdevs": 4, 00:16:16.649 "num_base_bdevs_discovered": 3, 00:16:16.649 "num_base_bdevs_operational": 3, 00:16:16.649 "base_bdevs_list": [ 00:16:16.649 { 00:16:16.649 "name": null, 00:16:16.649 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:16.649 "is_configured": false, 00:16:16.649 "data_offset": 0, 00:16:16.649 "data_size": 63488 00:16:16.649 }, 00:16:16.649 { 00:16:16.649 "name": "BaseBdev2", 00:16:16.649 "uuid": "dc5c58f1-a0ed-544f-bfe0-ddfe4389ae48", 00:16:16.649 "is_configured": true, 00:16:16.649 "data_offset": 2048, 00:16:16.649 "data_size": 63488 00:16:16.649 }, 00:16:16.649 { 00:16:16.649 "name": "BaseBdev3", 00:16:16.649 "uuid": "94c489d5-5b9e-51a0-853b-afb9f6d46968", 00:16:16.649 "is_configured": true, 00:16:16.649 "data_offset": 2048, 00:16:16.649 "data_size": 63488 00:16:16.649 }, 00:16:16.649 { 00:16:16.649 "name": "BaseBdev4", 00:16:16.649 "uuid": "a1af2076-f95a-5757-bd4e-c8624781b061", 00:16:16.649 "is_configured": true, 00:16:16.649 "data_offset": 2048, 00:16:16.649 "data_size": 63488 00:16:16.649 } 
00:16:16.649 ] 00:16:16.649 }' 00:16:16.649 04:07:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:16.649 04:07:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:16.910 143.50 IOPS, 430.50 MiB/s [2024-12-06T04:07:10.264Z] 04:07:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:16.910 04:07:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:16.910 04:07:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:16.910 04:07:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:16.910 04:07:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:16.910 04:07:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:16.910 04:07:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:16.910 04:07:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:16.910 04:07:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:16.910 04:07:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:16.910 04:07:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:16.910 "name": "raid_bdev1", 00:16:16.910 "uuid": "8a44c5c4-5201-411b-be96-4dfc57b387d0", 00:16:16.910 "strip_size_kb": 0, 00:16:16.910 "state": "online", 00:16:16.910 "raid_level": "raid1", 00:16:16.910 "superblock": true, 00:16:16.910 "num_base_bdevs": 4, 00:16:16.910 "num_base_bdevs_discovered": 3, 00:16:16.910 "num_base_bdevs_operational": 3, 00:16:16.910 "base_bdevs_list": [ 00:16:16.910 { 00:16:16.910 "name": null, 00:16:16.910 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:16:16.910 "is_configured": false, 00:16:16.910 "data_offset": 0, 00:16:16.910 "data_size": 63488 00:16:16.910 }, 00:16:16.910 { 00:16:16.910 "name": "BaseBdev2", 00:16:16.910 "uuid": "dc5c58f1-a0ed-544f-bfe0-ddfe4389ae48", 00:16:16.910 "is_configured": true, 00:16:16.910 "data_offset": 2048, 00:16:16.910 "data_size": 63488 00:16:16.910 }, 00:16:16.910 { 00:16:16.910 "name": "BaseBdev3", 00:16:16.910 "uuid": "94c489d5-5b9e-51a0-853b-afb9f6d46968", 00:16:16.910 "is_configured": true, 00:16:16.910 "data_offset": 2048, 00:16:16.910 "data_size": 63488 00:16:16.910 }, 00:16:16.910 { 00:16:16.910 "name": "BaseBdev4", 00:16:16.910 "uuid": "a1af2076-f95a-5757-bd4e-c8624781b061", 00:16:16.910 "is_configured": true, 00:16:16.910 "data_offset": 2048, 00:16:16.910 "data_size": 63488 00:16:16.910 } 00:16:16.910 ] 00:16:16.910 }' 00:16:16.910 04:07:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:16.910 04:07:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:16.910 04:07:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:16.910 04:07:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:16.910 04:07:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:16:16.910 04:07:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:16.910 04:07:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:16.910 [2024-12-06 04:07:10.241460] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:17.169 04:07:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:17.169 04:07:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@663 -- # sleep 1 
00:16:17.169 [2024-12-06 04:07:10.325155] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:16:17.169 [2024-12-06 04:07:10.327467] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:16:17.429 [2024-12-06 04:07:10.618663] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:16:17.690 145.33 IOPS, 436.00 MiB/s [2024-12-06T04:07:11.044Z] [2024-12-06 04:07:10.999958] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:16:17.690 [2024-12-06 04:07:11.000461] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:16:17.949 [2024-12-06 04:07:11.221377] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:16:17.949 [2024-12-06 04:07:11.221894] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:16:17.949 04:07:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:17.949 04:07:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:17.949 04:07:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:17.949 04:07:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:17.949 04:07:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:18.208 04:07:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:18.208 04:07:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:18.208 04:07:11 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:18.208 04:07:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:18.208 04:07:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:18.208 04:07:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:18.208 "name": "raid_bdev1", 00:16:18.208 "uuid": "8a44c5c4-5201-411b-be96-4dfc57b387d0", 00:16:18.208 "strip_size_kb": 0, 00:16:18.208 "state": "online", 00:16:18.208 "raid_level": "raid1", 00:16:18.208 "superblock": true, 00:16:18.208 "num_base_bdevs": 4, 00:16:18.208 "num_base_bdevs_discovered": 4, 00:16:18.208 "num_base_bdevs_operational": 4, 00:16:18.208 "process": { 00:16:18.208 "type": "rebuild", 00:16:18.208 "target": "spare", 00:16:18.208 "progress": { 00:16:18.208 "blocks": 10240, 00:16:18.208 "percent": 16 00:16:18.208 } 00:16:18.208 }, 00:16:18.208 "base_bdevs_list": [ 00:16:18.208 { 00:16:18.208 "name": "spare", 00:16:18.208 "uuid": "5839e026-dba2-5cf9-a2ff-d5da9aaf4211", 00:16:18.208 "is_configured": true, 00:16:18.208 "data_offset": 2048, 00:16:18.208 "data_size": 63488 00:16:18.208 }, 00:16:18.208 { 00:16:18.208 "name": "BaseBdev2", 00:16:18.208 "uuid": "dc5c58f1-a0ed-544f-bfe0-ddfe4389ae48", 00:16:18.208 "is_configured": true, 00:16:18.208 "data_offset": 2048, 00:16:18.208 "data_size": 63488 00:16:18.208 }, 00:16:18.208 { 00:16:18.208 "name": "BaseBdev3", 00:16:18.208 "uuid": "94c489d5-5b9e-51a0-853b-afb9f6d46968", 00:16:18.208 "is_configured": true, 00:16:18.208 "data_offset": 2048, 00:16:18.208 "data_size": 63488 00:16:18.208 }, 00:16:18.208 { 00:16:18.209 "name": "BaseBdev4", 00:16:18.209 "uuid": "a1af2076-f95a-5757-bd4e-c8624781b061", 00:16:18.209 "is_configured": true, 00:16:18.209 "data_offset": 2048, 00:16:18.209 "data_size": 63488 00:16:18.209 } 00:16:18.209 ] 00:16:18.209 }' 00:16:18.209 04:07:11 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:18.209 04:07:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:18.209 04:07:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:18.209 04:07:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:18.209 04:07:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:16:18.209 04:07:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:16:18.209 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:16:18.209 04:07:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4 00:16:18.209 04:07:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:16:18.209 04:07:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@693 -- # '[' 4 -gt 2 ']' 00:16:18.209 04:07:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@695 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:16:18.209 04:07:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:18.209 04:07:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:18.209 [2024-12-06 04:07:11.419642] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:16:18.468 [2024-12-06 04:07:11.667344] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d000006220 00:16:18.468 [2024-12-06 04:07:11.667458] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d0000063c0 00:16:18.468 04:07:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:18.468 04:07:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@698 -- # base_bdevs[1]= 00:16:18.468 
04:07:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@699 -- # (( num_base_bdevs_operational-- )) 00:16:18.468 04:07:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@702 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:18.468 04:07:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:18.468 04:07:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:18.468 04:07:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:18.468 04:07:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:18.468 04:07:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:18.468 04:07:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:18.468 04:07:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:18.468 04:07:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:18.468 04:07:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:18.468 04:07:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:18.468 "name": "raid_bdev1", 00:16:18.468 "uuid": "8a44c5c4-5201-411b-be96-4dfc57b387d0", 00:16:18.468 "strip_size_kb": 0, 00:16:18.468 "state": "online", 00:16:18.468 "raid_level": "raid1", 00:16:18.468 "superblock": true, 00:16:18.468 "num_base_bdevs": 4, 00:16:18.468 "num_base_bdevs_discovered": 3, 00:16:18.468 "num_base_bdevs_operational": 3, 00:16:18.468 "process": { 00:16:18.468 "type": "rebuild", 00:16:18.468 "target": "spare", 00:16:18.468 "progress": { 00:16:18.468 "blocks": 12288, 00:16:18.468 "percent": 19 00:16:18.468 } 00:16:18.468 }, 00:16:18.468 "base_bdevs_list": [ 00:16:18.468 { 00:16:18.468 "name": "spare", 
00:16:18.468 "uuid": "5839e026-dba2-5cf9-a2ff-d5da9aaf4211", 00:16:18.468 "is_configured": true, 00:16:18.468 "data_offset": 2048, 00:16:18.468 "data_size": 63488 00:16:18.468 }, 00:16:18.468 { 00:16:18.468 "name": null, 00:16:18.468 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:18.468 "is_configured": false, 00:16:18.468 "data_offset": 0, 00:16:18.468 "data_size": 63488 00:16:18.468 }, 00:16:18.468 { 00:16:18.468 "name": "BaseBdev3", 00:16:18.468 "uuid": "94c489d5-5b9e-51a0-853b-afb9f6d46968", 00:16:18.468 "is_configured": true, 00:16:18.468 "data_offset": 2048, 00:16:18.468 "data_size": 63488 00:16:18.468 }, 00:16:18.468 { 00:16:18.468 "name": "BaseBdev4", 00:16:18.468 "uuid": "a1af2076-f95a-5757-bd4e-c8624781b061", 00:16:18.468 "is_configured": true, 00:16:18.468 "data_offset": 2048, 00:16:18.468 "data_size": 63488 00:16:18.468 } 00:16:18.468 ] 00:16:18.468 }' 00:16:18.468 04:07:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:18.468 04:07:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:18.468 04:07:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:18.469 04:07:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:18.469 04:07:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@706 -- # local timeout=509 00:16:18.469 04:07:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:18.469 04:07:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:18.469 04:07:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:18.469 04:07:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:18.469 04:07:11 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@171 -- # local target=spare 00:16:18.469 04:07:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:18.469 04:07:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:18.469 04:07:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:18.469 04:07:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:18.469 04:07:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:18.469 04:07:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:18.728 04:07:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:18.728 "name": "raid_bdev1", 00:16:18.728 "uuid": "8a44c5c4-5201-411b-be96-4dfc57b387d0", 00:16:18.728 "strip_size_kb": 0, 00:16:18.728 "state": "online", 00:16:18.728 "raid_level": "raid1", 00:16:18.728 "superblock": true, 00:16:18.728 "num_base_bdevs": 4, 00:16:18.728 "num_base_bdevs_discovered": 3, 00:16:18.728 "num_base_bdevs_operational": 3, 00:16:18.728 "process": { 00:16:18.728 "type": "rebuild", 00:16:18.728 "target": "spare", 00:16:18.728 "progress": { 00:16:18.728 "blocks": 12288, 00:16:18.728 "percent": 19 00:16:18.728 } 00:16:18.728 }, 00:16:18.728 "base_bdevs_list": [ 00:16:18.728 { 00:16:18.728 "name": "spare", 00:16:18.728 "uuid": "5839e026-dba2-5cf9-a2ff-d5da9aaf4211", 00:16:18.728 "is_configured": true, 00:16:18.728 "data_offset": 2048, 00:16:18.728 "data_size": 63488 00:16:18.728 }, 00:16:18.728 { 00:16:18.728 "name": null, 00:16:18.728 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:18.728 "is_configured": false, 00:16:18.728 "data_offset": 0, 00:16:18.728 "data_size": 63488 00:16:18.728 }, 00:16:18.728 { 00:16:18.728 "name": "BaseBdev3", 00:16:18.728 "uuid": "94c489d5-5b9e-51a0-853b-afb9f6d46968", 00:16:18.728 "is_configured": 
true, 00:16:18.728 "data_offset": 2048, 00:16:18.728 "data_size": 63488 00:16:18.728 }, 00:16:18.728 { 00:16:18.728 "name": "BaseBdev4", 00:16:18.728 "uuid": "a1af2076-f95a-5757-bd4e-c8624781b061", 00:16:18.728 "is_configured": true, 00:16:18.728 "data_offset": 2048, 00:16:18.728 "data_size": 63488 00:16:18.728 } 00:16:18.728 ] 00:16:18.728 }' 00:16:18.728 04:07:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:18.728 04:07:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:18.728 04:07:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:18.728 [2024-12-06 04:07:11.912063] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:16:18.728 [2024-12-06 04:07:11.912427] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:16:18.728 04:07:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:18.728 04:07:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:18.987 132.50 IOPS, 397.50 MiB/s [2024-12-06T04:07:12.341Z] [2024-12-06 04:07:12.248619] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 offset_begin: 18432 offset_end: 24576 00:16:19.247 [2024-12-06 04:07:12.485152] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576 00:16:19.506 [2024-12-06 04:07:12.818752] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 26624 offset_begin: 24576 offset_end: 30720 00:16:19.506 [2024-12-06 04:07:12.819932] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 26624 offset_begin: 24576 offset_end: 30720 00:16:19.766 04:07:12 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:19.766 04:07:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:19.766 04:07:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:19.766 04:07:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:19.766 04:07:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:19.766 04:07:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:19.766 04:07:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:19.766 04:07:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:19.766 04:07:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:19.766 04:07:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:19.766 04:07:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:19.766 04:07:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:19.766 "name": "raid_bdev1", 00:16:19.766 "uuid": "8a44c5c4-5201-411b-be96-4dfc57b387d0", 00:16:19.766 "strip_size_kb": 0, 00:16:19.766 "state": "online", 00:16:19.766 "raid_level": "raid1", 00:16:19.766 "superblock": true, 00:16:19.766 "num_base_bdevs": 4, 00:16:19.766 "num_base_bdevs_discovered": 3, 00:16:19.766 "num_base_bdevs_operational": 3, 00:16:19.766 "process": { 00:16:19.766 "type": "rebuild", 00:16:19.766 "target": "spare", 00:16:19.766 "progress": { 00:16:19.766 "blocks": 26624, 00:16:19.766 "percent": 41 00:16:19.766 } 00:16:19.766 }, 00:16:19.766 "base_bdevs_list": [ 00:16:19.766 { 00:16:19.766 "name": "spare", 00:16:19.766 "uuid": 
"5839e026-dba2-5cf9-a2ff-d5da9aaf4211", 00:16:19.766 "is_configured": true, 00:16:19.766 "data_offset": 2048, 00:16:19.766 "data_size": 63488 00:16:19.766 }, 00:16:19.766 { 00:16:19.766 "name": null, 00:16:19.766 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:19.766 "is_configured": false, 00:16:19.766 "data_offset": 0, 00:16:19.766 "data_size": 63488 00:16:19.766 }, 00:16:19.766 { 00:16:19.766 "name": "BaseBdev3", 00:16:19.766 "uuid": "94c489d5-5b9e-51a0-853b-afb9f6d46968", 00:16:19.766 "is_configured": true, 00:16:19.766 "data_offset": 2048, 00:16:19.766 "data_size": 63488 00:16:19.766 }, 00:16:19.766 { 00:16:19.766 "name": "BaseBdev4", 00:16:19.766 "uuid": "a1af2076-f95a-5757-bd4e-c8624781b061", 00:16:19.766 "is_configured": true, 00:16:19.766 "data_offset": 2048, 00:16:19.766 "data_size": 63488 00:16:19.766 } 00:16:19.766 ] 00:16:19.766 }' 00:16:19.766 04:07:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:19.766 04:07:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:19.766 114.80 IOPS, 344.40 MiB/s [2024-12-06T04:07:13.120Z] 04:07:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:19.766 04:07:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:19.766 04:07:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:20.336 [2024-12-06 04:07:13.443135] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 34816 offset_begin: 30720 offset_end: 36864 00:16:20.336 [2024-12-06 04:07:13.673094] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 38912 offset_begin: 36864 offset_end: 43008 00:16:20.596 [2024-12-06 04:07:13.784872] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 40960 offset_begin: 36864 offset_end: 43008 00:16:20.596 [2024-12-06 
04:07:13.785308] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 40960 offset_begin: 36864 offset_end: 43008 00:16:20.856 104.67 IOPS, 314.00 MiB/s [2024-12-06T04:07:14.210Z] [2024-12-06 04:07:14.024300] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 45056 offset_begin: 43008 offset_end: 49152 00:16:20.856 04:07:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:20.856 04:07:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:20.856 04:07:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:20.856 04:07:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:20.856 04:07:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:20.856 04:07:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:20.856 04:07:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:20.856 04:07:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:20.856 04:07:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:20.856 04:07:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:20.856 04:07:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:20.856 04:07:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:20.856 "name": "raid_bdev1", 00:16:20.856 "uuid": "8a44c5c4-5201-411b-be96-4dfc57b387d0", 00:16:20.856 "strip_size_kb": 0, 00:16:20.856 "state": "online", 00:16:20.856 "raid_level": "raid1", 00:16:20.856 "superblock": true, 00:16:20.856 "num_base_bdevs": 4, 00:16:20.856 
"num_base_bdevs_discovered": 3, 00:16:20.856 "num_base_bdevs_operational": 3, 00:16:20.856 "process": { 00:16:20.856 "type": "rebuild", 00:16:20.856 "target": "spare", 00:16:20.856 "progress": { 00:16:20.856 "blocks": 45056, 00:16:20.856 "percent": 70 00:16:20.856 } 00:16:20.856 }, 00:16:20.856 "base_bdevs_list": [ 00:16:20.856 { 00:16:20.856 "name": "spare", 00:16:20.856 "uuid": "5839e026-dba2-5cf9-a2ff-d5da9aaf4211", 00:16:20.856 "is_configured": true, 00:16:20.856 "data_offset": 2048, 00:16:20.856 "data_size": 63488 00:16:20.856 }, 00:16:20.856 { 00:16:20.856 "name": null, 00:16:20.856 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:20.856 "is_configured": false, 00:16:20.856 "data_offset": 0, 00:16:20.856 "data_size": 63488 00:16:20.856 }, 00:16:20.856 { 00:16:20.856 "name": "BaseBdev3", 00:16:20.856 "uuid": "94c489d5-5b9e-51a0-853b-afb9f6d46968", 00:16:20.856 "is_configured": true, 00:16:20.856 "data_offset": 2048, 00:16:20.856 "data_size": 63488 00:16:20.856 }, 00:16:20.856 { 00:16:20.856 "name": "BaseBdev4", 00:16:20.856 "uuid": "a1af2076-f95a-5757-bd4e-c8624781b061", 00:16:20.856 "is_configured": true, 00:16:20.856 "data_offset": 2048, 00:16:20.856 "data_size": 63488 00:16:20.856 } 00:16:20.856 ] 00:16:20.856 }' 00:16:20.856 04:07:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:20.856 04:07:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:20.856 04:07:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:20.856 04:07:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:20.856 04:07:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:21.116 [2024-12-06 04:07:14.468861] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 51200 offset_begin: 49152 offset_end: 55296 00:16:21.946 94.43 IOPS, 
283.29 MiB/s [2024-12-06T04:07:15.300Z] 04:07:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:21.946 04:07:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:21.946 04:07:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:21.946 04:07:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:21.946 04:07:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:21.946 04:07:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:21.946 04:07:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:21.946 04:07:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:21.946 04:07:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:21.946 04:07:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:21.946 04:07:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:21.946 04:07:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:21.946 "name": "raid_bdev1", 00:16:21.946 "uuid": "8a44c5c4-5201-411b-be96-4dfc57b387d0", 00:16:21.946 "strip_size_kb": 0, 00:16:21.946 "state": "online", 00:16:21.946 "raid_level": "raid1", 00:16:21.946 "superblock": true, 00:16:21.946 "num_base_bdevs": 4, 00:16:21.946 "num_base_bdevs_discovered": 3, 00:16:21.946 "num_base_bdevs_operational": 3, 00:16:21.946 "process": { 00:16:21.946 "type": "rebuild", 00:16:21.946 "target": "spare", 00:16:21.946 "progress": { 00:16:21.946 "blocks": 61440, 00:16:21.946 "percent": 96 00:16:21.946 } 00:16:21.946 }, 00:16:21.946 "base_bdevs_list": [ 00:16:21.946 { 
00:16:21.946 "name": "spare", 00:16:21.946 "uuid": "5839e026-dba2-5cf9-a2ff-d5da9aaf4211", 00:16:21.946 "is_configured": true, 00:16:21.946 "data_offset": 2048, 00:16:21.946 "data_size": 63488 00:16:21.946 }, 00:16:21.946 { 00:16:21.946 "name": null, 00:16:21.946 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:21.946 "is_configured": false, 00:16:21.946 "data_offset": 0, 00:16:21.946 "data_size": 63488 00:16:21.946 }, 00:16:21.946 { 00:16:21.946 "name": "BaseBdev3", 00:16:21.946 "uuid": "94c489d5-5b9e-51a0-853b-afb9f6d46968", 00:16:21.946 "is_configured": true, 00:16:21.946 "data_offset": 2048, 00:16:21.946 "data_size": 63488 00:16:21.946 }, 00:16:21.946 { 00:16:21.946 "name": "BaseBdev4", 00:16:21.946 "uuid": "a1af2076-f95a-5757-bd4e-c8624781b061", 00:16:21.946 "is_configured": true, 00:16:21.946 "data_offset": 2048, 00:16:21.946 "data_size": 63488 00:16:21.946 } 00:16:21.946 ] 00:16:21.946 }' 00:16:21.946 04:07:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:21.946 [2024-12-06 04:07:15.237300] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:16:21.946 04:07:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:21.946 04:07:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:22.206 04:07:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:22.206 04:07:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:22.206 [2024-12-06 04:07:15.337205] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:16:22.206 [2024-12-06 04:07:15.348135] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:23.075 87.00 IOPS, 261.00 MiB/s [2024-12-06T04:07:16.429Z] 04:07:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( 
SECONDS < timeout )) 00:16:23.075 04:07:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:23.075 04:07:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:23.075 04:07:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:23.075 04:07:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:23.075 04:07:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:23.075 04:07:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:23.075 04:07:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:23.075 04:07:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:23.075 04:07:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:23.075 04:07:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:23.075 04:07:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:23.075 "name": "raid_bdev1", 00:16:23.075 "uuid": "8a44c5c4-5201-411b-be96-4dfc57b387d0", 00:16:23.075 "strip_size_kb": 0, 00:16:23.075 "state": "online", 00:16:23.075 "raid_level": "raid1", 00:16:23.075 "superblock": true, 00:16:23.075 "num_base_bdevs": 4, 00:16:23.075 "num_base_bdevs_discovered": 3, 00:16:23.075 "num_base_bdevs_operational": 3, 00:16:23.075 "base_bdevs_list": [ 00:16:23.075 { 00:16:23.075 "name": "spare", 00:16:23.075 "uuid": "5839e026-dba2-5cf9-a2ff-d5da9aaf4211", 00:16:23.075 "is_configured": true, 00:16:23.075 "data_offset": 2048, 00:16:23.075 "data_size": 63488 00:16:23.075 }, 00:16:23.075 { 00:16:23.075 "name": null, 00:16:23.075 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:23.075 
"is_configured": false, 00:16:23.075 "data_offset": 0, 00:16:23.075 "data_size": 63488 00:16:23.075 }, 00:16:23.075 { 00:16:23.075 "name": "BaseBdev3", 00:16:23.075 "uuid": "94c489d5-5b9e-51a0-853b-afb9f6d46968", 00:16:23.075 "is_configured": true, 00:16:23.075 "data_offset": 2048, 00:16:23.076 "data_size": 63488 00:16:23.076 }, 00:16:23.076 { 00:16:23.076 "name": "BaseBdev4", 00:16:23.076 "uuid": "a1af2076-f95a-5757-bd4e-c8624781b061", 00:16:23.076 "is_configured": true, 00:16:23.076 "data_offset": 2048, 00:16:23.076 "data_size": 63488 00:16:23.076 } 00:16:23.076 ] 00:16:23.076 }' 00:16:23.076 04:07:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:23.076 04:07:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:16:23.076 04:07:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:23.334 04:07:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:16:23.334 04:07:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@709 -- # break 00:16:23.334 04:07:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:23.334 04:07:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:23.334 04:07:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:23.334 04:07:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:23.335 04:07:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:23.335 04:07:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:23.335 04:07:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:23.335 04:07:16 bdev_raid.raid_rebuild_test_sb_io 
-- common/autotest_common.sh@10 -- # set +x 00:16:23.335 04:07:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:23.335 04:07:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:23.335 04:07:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:23.335 "name": "raid_bdev1", 00:16:23.335 "uuid": "8a44c5c4-5201-411b-be96-4dfc57b387d0", 00:16:23.335 "strip_size_kb": 0, 00:16:23.335 "state": "online", 00:16:23.335 "raid_level": "raid1", 00:16:23.335 "superblock": true, 00:16:23.335 "num_base_bdevs": 4, 00:16:23.335 "num_base_bdevs_discovered": 3, 00:16:23.335 "num_base_bdevs_operational": 3, 00:16:23.335 "base_bdevs_list": [ 00:16:23.335 { 00:16:23.335 "name": "spare", 00:16:23.335 "uuid": "5839e026-dba2-5cf9-a2ff-d5da9aaf4211", 00:16:23.335 "is_configured": true, 00:16:23.335 "data_offset": 2048, 00:16:23.335 "data_size": 63488 00:16:23.335 }, 00:16:23.335 { 00:16:23.335 "name": null, 00:16:23.335 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:23.335 "is_configured": false, 00:16:23.335 "data_offset": 0, 00:16:23.335 "data_size": 63488 00:16:23.335 }, 00:16:23.335 { 00:16:23.335 "name": "BaseBdev3", 00:16:23.335 "uuid": "94c489d5-5b9e-51a0-853b-afb9f6d46968", 00:16:23.335 "is_configured": true, 00:16:23.335 "data_offset": 2048, 00:16:23.335 "data_size": 63488 00:16:23.335 }, 00:16:23.335 { 00:16:23.335 "name": "BaseBdev4", 00:16:23.335 "uuid": "a1af2076-f95a-5757-bd4e-c8624781b061", 00:16:23.335 "is_configured": true, 00:16:23.335 "data_offset": 2048, 00:16:23.335 "data_size": 63488 00:16:23.335 } 00:16:23.335 ] 00:16:23.335 }' 00:16:23.335 04:07:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:23.335 04:07:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:23.335 04:07:16 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:23.335 04:07:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:23.335 04:07:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:16:23.335 04:07:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:23.335 04:07:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:23.335 04:07:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:23.335 04:07:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:23.335 04:07:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:23.335 04:07:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:23.335 04:07:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:23.335 04:07:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:23.335 04:07:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:23.335 04:07:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:23.335 04:07:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:23.335 04:07:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:23.335 04:07:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:23.335 04:07:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:23.335 04:07:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:23.335 "name": 
"raid_bdev1", 00:16:23.335 "uuid": "8a44c5c4-5201-411b-be96-4dfc57b387d0", 00:16:23.335 "strip_size_kb": 0, 00:16:23.335 "state": "online", 00:16:23.335 "raid_level": "raid1", 00:16:23.335 "superblock": true, 00:16:23.335 "num_base_bdevs": 4, 00:16:23.335 "num_base_bdevs_discovered": 3, 00:16:23.335 "num_base_bdevs_operational": 3, 00:16:23.335 "base_bdevs_list": [ 00:16:23.335 { 00:16:23.335 "name": "spare", 00:16:23.335 "uuid": "5839e026-dba2-5cf9-a2ff-d5da9aaf4211", 00:16:23.335 "is_configured": true, 00:16:23.335 "data_offset": 2048, 00:16:23.335 "data_size": 63488 00:16:23.335 }, 00:16:23.335 { 00:16:23.335 "name": null, 00:16:23.335 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:23.335 "is_configured": false, 00:16:23.335 "data_offset": 0, 00:16:23.335 "data_size": 63488 00:16:23.335 }, 00:16:23.335 { 00:16:23.335 "name": "BaseBdev3", 00:16:23.335 "uuid": "94c489d5-5b9e-51a0-853b-afb9f6d46968", 00:16:23.335 "is_configured": true, 00:16:23.335 "data_offset": 2048, 00:16:23.335 "data_size": 63488 00:16:23.335 }, 00:16:23.335 { 00:16:23.335 "name": "BaseBdev4", 00:16:23.335 "uuid": "a1af2076-f95a-5757-bd4e-c8624781b061", 00:16:23.335 "is_configured": true, 00:16:23.335 "data_offset": 2048, 00:16:23.335 "data_size": 63488 00:16:23.335 } 00:16:23.335 ] 00:16:23.335 }' 00:16:23.335 04:07:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:23.335 04:07:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:23.902 81.78 IOPS, 245.33 MiB/s [2024-12-06T04:07:17.256Z] 04:07:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:16:23.902 04:07:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:23.902 04:07:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:23.902 [2024-12-06 04:07:17.025913] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 
00:16:23.902 [2024-12-06 04:07:17.026008] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:23.902 00:16:23.902 Latency(us) 00:16:23.902 [2024-12-06T04:07:17.256Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:23.902 Job: raid_bdev1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 2, IO size: 3145728) 00:16:23.902 raid_bdev1 : 9.14 81.61 244.84 0.00 0.00 16170.35 321.96 120883.87 00:16:23.902 [2024-12-06T04:07:17.256Z] =================================================================================================================== 00:16:23.902 [2024-12-06T04:07:17.256Z] Total : 81.61 244.84 0.00 0.00 16170.35 321.96 120883.87 00:16:23.902 [2024-12-06 04:07:17.145077] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:23.902 { 00:16:23.902 "results": [ 00:16:23.902 { 00:16:23.902 "job": "raid_bdev1", 00:16:23.902 "core_mask": "0x1", 00:16:23.902 "workload": "randrw", 00:16:23.902 "percentage": 50, 00:16:23.902 "status": "finished", 00:16:23.902 "queue_depth": 2, 00:16:23.902 "io_size": 3145728, 00:16:23.902 "runtime": 9.14065, 00:16:23.902 "iops": 81.61345199739624, 00:16:23.902 "mibps": 244.84035599218873, 00:16:23.902 "io_failed": 0, 00:16:23.902 "io_timeout": 0, 00:16:23.902 "avg_latency_us": 16170.351475701558, 00:16:23.902 "min_latency_us": 321.95633187772927, 00:16:23.902 "max_latency_us": 120883.87074235808 00:16:23.902 } 00:16:23.902 ], 00:16:23.902 "core_count": 1 00:16:23.902 } 00:16:23.902 [2024-12-06 04:07:17.145222] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:23.902 [2024-12-06 04:07:17.145346] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:23.902 [2024-12-06 04:07:17.145362] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:16:23.902 04:07:17 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:23.902 04:07:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:23.902 04:07:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:23.902 04:07:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:23.902 04:07:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # jq length 00:16:23.902 04:07:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:23.902 04:07:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:16:23.902 04:07:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:16:23.902 04:07:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@723 -- # '[' true = true ']' 00:16:23.902 04:07:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@725 -- # nbd_start_disks /var/tmp/spdk.sock spare /dev/nbd0 00:16:23.902 04:07:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:16:23.902 04:07:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=('spare') 00:16:23.902 04:07:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:16:23.902 04:07:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:16:23.902 04:07:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:16:23.902 04:07:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i 00:16:23.902 04:07:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:16:23.902 04:07:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:16:23.902 04:07:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd0 00:16:24.160 /dev/nbd0 00:16:24.160 04:07:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:16:24.160 04:07:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:16:24.160 04:07:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:16:24.160 04:07:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 -- # local i 00:16:24.160 04:07:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:16:24.160 04:07:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:16:24.160 04:07:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:16:24.160 04:07:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@877 -- # break 00:16:24.160 04:07:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:16:24.160 04:07:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:16:24.160 04:07:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:16:24.160 1+0 records in 00:16:24.160 1+0 records out 00:16:24.160 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000615504 s, 6.7 MB/s 00:16:24.160 04:07:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:24.160 04:07:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # size=4096 00:16:24.160 04:07:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:24.160 04:07:17 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:16:24.160 04:07:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@893 -- # return 0 00:16:24.160 04:07:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:16:24.160 04:07:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:16:24.160 04:07:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:16:24.160 04:07:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@727 -- # '[' -z '' ']' 00:16:24.160 04:07:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@728 -- # continue 00:16:24.160 04:07:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:16:24.160 04:07:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@727 -- # '[' -z BaseBdev3 ']' 00:16:24.160 04:07:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@730 -- # nbd_start_disks /var/tmp/spdk.sock BaseBdev3 /dev/nbd1 00:16:24.160 04:07:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:16:24.160 04:07:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev3') 00:16:24.160 04:07:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:16:24.160 04:07:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:16:24.160 04:07:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:16:24.160 04:07:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i 00:16:24.160 04:07:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:16:24.160 04:07:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:16:24.160 04:07:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk.sock nbd_start_disk BaseBdev3 /dev/nbd1 00:16:24.419 /dev/nbd1 00:16:24.419 04:07:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:16:24.419 04:07:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:16:24.419 04:07:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:16:24.419 04:07:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 -- # local i 00:16:24.419 04:07:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:16:24.419 04:07:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:16:24.419 04:07:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:16:24.419 04:07:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@877 -- # break 00:16:24.419 04:07:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:16:24.419 04:07:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:16:24.419 04:07:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:16:24.419 1+0 records in 00:16:24.419 1+0 records out 00:16:24.419 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000304131 s, 13.5 MB/s 00:16:24.419 04:07:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:24.419 04:07:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # size=4096 00:16:24.419 04:07:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:24.419 04:07:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 
00:16:24.419 04:07:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@893 -- # return 0 00:16:24.419 04:07:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:16:24.419 04:07:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:16:24.419 04:07:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@731 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:16:24.677 04:07:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@732 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1 00:16:24.677 04:07:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:16:24.677 04:07:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:16:24.677 04:07:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:16:24.677 04:07:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i 00:16:24.677 04:07:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:24.677 04:07:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:16:24.936 04:07:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:16:24.936 04:07:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:16:24.936 04:07:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:16:24.936 04:07:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:24.936 04:07:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:24.936 04:07:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:16:24.936 04:07:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break 
00:16:24.936 04:07:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0 00:16:24.936 04:07:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:16:24.936 04:07:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@727 -- # '[' -z BaseBdev4 ']' 00:16:24.936 04:07:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@730 -- # nbd_start_disks /var/tmp/spdk.sock BaseBdev4 /dev/nbd1 00:16:24.936 04:07:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:16:24.936 04:07:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev4') 00:16:24.936 04:07:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:16:24.936 04:07:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:16:24.936 04:07:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:16:24.936 04:07:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i 00:16:24.936 04:07:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:16:24.936 04:07:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:16:24.936 04:07:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev4 /dev/nbd1 00:16:25.196 /dev/nbd1 00:16:25.196 04:07:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:16:25.196 04:07:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:16:25.196 04:07:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:16:25.196 04:07:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 -- # local i 00:16:25.196 04:07:18 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@875 -- # (( i = 1 )) 00:16:25.196 04:07:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:16:25.196 04:07:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:16:25.196 04:07:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@877 -- # break 00:16:25.196 04:07:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:16:25.196 04:07:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:16:25.196 04:07:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:16:25.196 1+0 records in 00:16:25.196 1+0 records out 00:16:25.196 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000307282 s, 13.3 MB/s 00:16:25.196 04:07:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:25.196 04:07:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # size=4096 00:16:25.196 04:07:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:25.196 04:07:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:16:25.196 04:07:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@893 -- # return 0 00:16:25.196 04:07:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:16:25.196 04:07:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:16:25.196 04:07:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@731 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:16:25.455 04:07:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@732 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1 00:16:25.455 
04:07:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:16:25.455 04:07:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:16:25.455 04:07:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:16:25.455 04:07:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i 00:16:25.455 04:07:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:25.455 04:07:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:16:25.713 04:07:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:16:25.713 04:07:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:16:25.713 04:07:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:16:25.713 04:07:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:25.713 04:07:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:25.713 04:07:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:16:25.713 04:07:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break 00:16:25.713 04:07:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0 00:16:25.713 04:07:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@734 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:16:25.713 04:07:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:16:25.713 04:07:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:16:25.713 04:07:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:16:25.713 
04:07:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i 00:16:25.713 04:07:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:25.713 04:07:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:16:25.713 04:07:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:16:25.713 04:07:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:16:25.713 04:07:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:16:25.713 04:07:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:25.713 04:07:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:25.713 04:07:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:16:25.713 04:07:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break 00:16:25.713 04:07:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0 00:16:25.713 04:07:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:16:25.713 04:07:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:16:25.713 04:07:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:25.713 04:07:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:25.972 04:07:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:25.972 04:07:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:16:25.972 04:07:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:25.972 
04:07:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:25.972 [2024-12-06 04:07:19.077556] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:16:25.972 [2024-12-06 04:07:19.077635] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:25.972 [2024-12-06 04:07:19.077661] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000bd80 00:16:25.972 [2024-12-06 04:07:19.077674] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:25.972 [2024-12-06 04:07:19.080286] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:25.972 [2024-12-06 04:07:19.080334] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:16:25.972 [2024-12-06 04:07:19.080450] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:16:25.972 [2024-12-06 04:07:19.080533] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:25.972 [2024-12-06 04:07:19.080697] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:16:25.972 [2024-12-06 04:07:19.080843] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:16:25.972 spare 00:16:25.972 04:07:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:25.972 04:07:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:16:25.972 04:07:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:25.972 04:07:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:25.972 [2024-12-06 04:07:19.180772] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:16:25.972 [2024-12-06 04:07:19.180846] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 
63488, blocklen 512 00:16:25.972 [2024-12-06 04:07:19.181293] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000037160 00:16:25.972 [2024-12-06 04:07:19.181551] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:16:25.972 [2024-12-06 04:07:19.181572] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:16:25.972 [2024-12-06 04:07:19.181829] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:25.972 04:07:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:25.972 04:07:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:16:25.972 04:07:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:25.972 04:07:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:25.972 04:07:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:25.972 04:07:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:25.972 04:07:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:25.972 04:07:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:25.972 04:07:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:25.972 04:07:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:25.972 04:07:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:25.972 04:07:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:25.972 04:07:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # 
rpc_cmd bdev_raid_get_bdevs all 00:16:25.972 04:07:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:25.972 04:07:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:25.973 04:07:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:25.973 04:07:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:25.973 "name": "raid_bdev1", 00:16:25.973 "uuid": "8a44c5c4-5201-411b-be96-4dfc57b387d0", 00:16:25.973 "strip_size_kb": 0, 00:16:25.973 "state": "online", 00:16:25.973 "raid_level": "raid1", 00:16:25.973 "superblock": true, 00:16:25.973 "num_base_bdevs": 4, 00:16:25.973 "num_base_bdevs_discovered": 3, 00:16:25.973 "num_base_bdevs_operational": 3, 00:16:25.973 "base_bdevs_list": [ 00:16:25.973 { 00:16:25.973 "name": "spare", 00:16:25.973 "uuid": "5839e026-dba2-5cf9-a2ff-d5da9aaf4211", 00:16:25.973 "is_configured": true, 00:16:25.973 "data_offset": 2048, 00:16:25.973 "data_size": 63488 00:16:25.973 }, 00:16:25.973 { 00:16:25.973 "name": null, 00:16:25.973 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:25.973 "is_configured": false, 00:16:25.973 "data_offset": 2048, 00:16:25.973 "data_size": 63488 00:16:25.973 }, 00:16:25.973 { 00:16:25.973 "name": "BaseBdev3", 00:16:25.973 "uuid": "94c489d5-5b9e-51a0-853b-afb9f6d46968", 00:16:25.973 "is_configured": true, 00:16:25.973 "data_offset": 2048, 00:16:25.973 "data_size": 63488 00:16:25.973 }, 00:16:25.973 { 00:16:25.973 "name": "BaseBdev4", 00:16:25.973 "uuid": "a1af2076-f95a-5757-bd4e-c8624781b061", 00:16:25.973 "is_configured": true, 00:16:25.973 "data_offset": 2048, 00:16:25.973 "data_size": 63488 00:16:25.973 } 00:16:25.973 ] 00:16:25.973 }' 00:16:25.973 04:07:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:25.973 04:07:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:26.541 04:07:19 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:26.541 04:07:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:26.541 04:07:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:26.541 04:07:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:26.541 04:07:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:26.541 04:07:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:26.541 04:07:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:26.541 04:07:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:26.541 04:07:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:26.541 04:07:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:26.541 04:07:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:26.541 "name": "raid_bdev1", 00:16:26.541 "uuid": "8a44c5c4-5201-411b-be96-4dfc57b387d0", 00:16:26.541 "strip_size_kb": 0, 00:16:26.541 "state": "online", 00:16:26.541 "raid_level": "raid1", 00:16:26.541 "superblock": true, 00:16:26.541 "num_base_bdevs": 4, 00:16:26.541 "num_base_bdevs_discovered": 3, 00:16:26.541 "num_base_bdevs_operational": 3, 00:16:26.541 "base_bdevs_list": [ 00:16:26.541 { 00:16:26.541 "name": "spare", 00:16:26.541 "uuid": "5839e026-dba2-5cf9-a2ff-d5da9aaf4211", 00:16:26.541 "is_configured": true, 00:16:26.541 "data_offset": 2048, 00:16:26.541 "data_size": 63488 00:16:26.541 }, 00:16:26.541 { 00:16:26.541 "name": null, 00:16:26.541 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:26.541 "is_configured": false, 00:16:26.541 "data_offset": 
2048, 00:16:26.541 "data_size": 63488 00:16:26.541 }, 00:16:26.541 { 00:16:26.541 "name": "BaseBdev3", 00:16:26.541 "uuid": "94c489d5-5b9e-51a0-853b-afb9f6d46968", 00:16:26.541 "is_configured": true, 00:16:26.541 "data_offset": 2048, 00:16:26.541 "data_size": 63488 00:16:26.541 }, 00:16:26.541 { 00:16:26.541 "name": "BaseBdev4", 00:16:26.541 "uuid": "a1af2076-f95a-5757-bd4e-c8624781b061", 00:16:26.541 "is_configured": true, 00:16:26.541 "data_offset": 2048, 00:16:26.541 "data_size": 63488 00:16:26.541 } 00:16:26.541 ] 00:16:26.541 }' 00:16:26.541 04:07:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:26.541 04:07:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:26.541 04:07:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:26.541 04:07:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:26.541 04:07:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:26.541 04:07:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:26.541 04:07:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:26.541 04:07:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:16:26.541 04:07:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:26.541 04:07:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:16:26.541 04:07:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:16:26.541 04:07:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:26.541 04:07:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 
00:16:26.541 [2024-12-06 04:07:19.832828] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:26.541 04:07:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:26.541 04:07:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:16:26.541 04:07:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:26.541 04:07:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:26.541 04:07:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:26.541 04:07:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:26.541 04:07:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:26.541 04:07:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:26.541 04:07:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:26.541 04:07:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:26.541 04:07:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:26.541 04:07:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:26.541 04:07:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:26.541 04:07:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:26.541 04:07:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:26.541 04:07:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:26.541 04:07:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # 
raid_bdev_info='{ 00:16:26.541 "name": "raid_bdev1", 00:16:26.541 "uuid": "8a44c5c4-5201-411b-be96-4dfc57b387d0", 00:16:26.541 "strip_size_kb": 0, 00:16:26.541 "state": "online", 00:16:26.541 "raid_level": "raid1", 00:16:26.541 "superblock": true, 00:16:26.541 "num_base_bdevs": 4, 00:16:26.541 "num_base_bdevs_discovered": 2, 00:16:26.541 "num_base_bdevs_operational": 2, 00:16:26.541 "base_bdevs_list": [ 00:16:26.541 { 00:16:26.541 "name": null, 00:16:26.541 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:26.541 "is_configured": false, 00:16:26.541 "data_offset": 0, 00:16:26.541 "data_size": 63488 00:16:26.541 }, 00:16:26.541 { 00:16:26.541 "name": null, 00:16:26.541 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:26.541 "is_configured": false, 00:16:26.541 "data_offset": 2048, 00:16:26.541 "data_size": 63488 00:16:26.541 }, 00:16:26.541 { 00:16:26.541 "name": "BaseBdev3", 00:16:26.541 "uuid": "94c489d5-5b9e-51a0-853b-afb9f6d46968", 00:16:26.541 "is_configured": true, 00:16:26.541 "data_offset": 2048, 00:16:26.541 "data_size": 63488 00:16:26.541 }, 00:16:26.541 { 00:16:26.541 "name": "BaseBdev4", 00:16:26.541 "uuid": "a1af2076-f95a-5757-bd4e-c8624781b061", 00:16:26.541 "is_configured": true, 00:16:26.541 "data_offset": 2048, 00:16:26.541 "data_size": 63488 00:16:26.541 } 00:16:26.541 ] 00:16:26.541 }' 00:16:26.541 04:07:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:26.541 04:07:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:27.109 04:07:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:16:27.109 04:07:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:27.109 04:07:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:27.109 [2024-12-06 04:07:20.184662] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 
00:16:27.109 [2024-12-06 04:07:20.184888] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (5) smaller than existing raid bdev raid_bdev1 (6) 00:16:27.109 [2024-12-06 04:07:20.184915] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:16:27.109 [2024-12-06 04:07:20.184955] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:27.109 [2024-12-06 04:07:20.203132] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000037230 00:16:27.109 04:07:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:27.109 04:07:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@757 -- # sleep 1 00:16:27.109 [2024-12-06 04:07:20.205403] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:16:28.043 04:07:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:28.043 04:07:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:28.043 04:07:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:28.043 04:07:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:28.043 04:07:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:28.043 04:07:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:28.043 04:07:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:28.043 04:07:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:28.043 04:07:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:28.043 04:07:21 bdev_raid.raid_rebuild_test_sb_io 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:28.043 04:07:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:28.043 "name": "raid_bdev1", 00:16:28.043 "uuid": "8a44c5c4-5201-411b-be96-4dfc57b387d0", 00:16:28.043 "strip_size_kb": 0, 00:16:28.043 "state": "online", 00:16:28.043 "raid_level": "raid1", 00:16:28.043 "superblock": true, 00:16:28.043 "num_base_bdevs": 4, 00:16:28.043 "num_base_bdevs_discovered": 3, 00:16:28.043 "num_base_bdevs_operational": 3, 00:16:28.043 "process": { 00:16:28.043 "type": "rebuild", 00:16:28.043 "target": "spare", 00:16:28.043 "progress": { 00:16:28.043 "blocks": 20480, 00:16:28.043 "percent": 32 00:16:28.043 } 00:16:28.043 }, 00:16:28.043 "base_bdevs_list": [ 00:16:28.043 { 00:16:28.043 "name": "spare", 00:16:28.043 "uuid": "5839e026-dba2-5cf9-a2ff-d5da9aaf4211", 00:16:28.043 "is_configured": true, 00:16:28.043 "data_offset": 2048, 00:16:28.043 "data_size": 63488 00:16:28.043 }, 00:16:28.043 { 00:16:28.043 "name": null, 00:16:28.043 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:28.043 "is_configured": false, 00:16:28.043 "data_offset": 2048, 00:16:28.043 "data_size": 63488 00:16:28.043 }, 00:16:28.043 { 00:16:28.043 "name": "BaseBdev3", 00:16:28.043 "uuid": "94c489d5-5b9e-51a0-853b-afb9f6d46968", 00:16:28.043 "is_configured": true, 00:16:28.043 "data_offset": 2048, 00:16:28.043 "data_size": 63488 00:16:28.043 }, 00:16:28.043 { 00:16:28.043 "name": "BaseBdev4", 00:16:28.043 "uuid": "a1af2076-f95a-5757-bd4e-c8624781b061", 00:16:28.043 "is_configured": true, 00:16:28.043 "data_offset": 2048, 00:16:28.043 "data_size": 63488 00:16:28.043 } 00:16:28.043 ] 00:16:28.043 }' 00:16:28.043 04:07:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:28.043 04:07:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:28.043 04:07:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 
-- # jq -r '.process.target // "none"' 00:16:28.043 04:07:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:28.043 04:07:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:16:28.043 04:07:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:28.043 04:07:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:28.043 [2024-12-06 04:07:21.328861] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:28.301 [2024-12-06 04:07:21.411685] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:16:28.301 [2024-12-06 04:07:21.411791] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:28.301 [2024-12-06 04:07:21.411810] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:28.301 [2024-12-06 04:07:21.411820] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:16:28.301 04:07:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:28.301 04:07:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:16:28.301 04:07:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:28.301 04:07:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:28.301 04:07:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:28.301 04:07:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:28.301 04:07:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:28.301 04:07:21 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:28.301 04:07:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:28.301 04:07:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:28.301 04:07:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:28.301 04:07:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:28.301 04:07:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:28.301 04:07:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:28.301 04:07:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:28.301 04:07:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:28.301 04:07:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:28.301 "name": "raid_bdev1", 00:16:28.301 "uuid": "8a44c5c4-5201-411b-be96-4dfc57b387d0", 00:16:28.301 "strip_size_kb": 0, 00:16:28.301 "state": "online", 00:16:28.301 "raid_level": "raid1", 00:16:28.301 "superblock": true, 00:16:28.301 "num_base_bdevs": 4, 00:16:28.301 "num_base_bdevs_discovered": 2, 00:16:28.301 "num_base_bdevs_operational": 2, 00:16:28.301 "base_bdevs_list": [ 00:16:28.301 { 00:16:28.301 "name": null, 00:16:28.301 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:28.301 "is_configured": false, 00:16:28.301 "data_offset": 0, 00:16:28.301 "data_size": 63488 00:16:28.301 }, 00:16:28.301 { 00:16:28.301 "name": null, 00:16:28.301 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:28.301 "is_configured": false, 00:16:28.301 "data_offset": 2048, 00:16:28.301 "data_size": 63488 00:16:28.301 }, 00:16:28.301 { 00:16:28.301 "name": "BaseBdev3", 00:16:28.301 "uuid": "94c489d5-5b9e-51a0-853b-afb9f6d46968", 00:16:28.301 
"is_configured": true, 00:16:28.301 "data_offset": 2048, 00:16:28.301 "data_size": 63488 00:16:28.301 }, 00:16:28.301 { 00:16:28.301 "name": "BaseBdev4", 00:16:28.301 "uuid": "a1af2076-f95a-5757-bd4e-c8624781b061", 00:16:28.301 "is_configured": true, 00:16:28.301 "data_offset": 2048, 00:16:28.301 "data_size": 63488 00:16:28.301 } 00:16:28.301 ] 00:16:28.301 }' 00:16:28.301 04:07:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:28.301 04:07:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:28.561 04:07:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:16:28.561 04:07:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:28.561 04:07:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:28.561 [2024-12-06 04:07:21.806654] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:16:28.561 [2024-12-06 04:07:21.806744] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:28.561 [2024-12-06 04:07:21.806779] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c680 00:16:28.561 [2024-12-06 04:07:21.806796] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:28.561 [2024-12-06 04:07:21.807385] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:28.561 [2024-12-06 04:07:21.807425] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:16:28.561 [2024-12-06 04:07:21.807538] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:16:28.561 [2024-12-06 04:07:21.807563] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (5) smaller than existing raid bdev raid_bdev1 (6) 00:16:28.561 [2024-12-06 04:07:21.807574] 
bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:16:28.561 [2024-12-06 04:07:21.807602] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:28.561 [2024-12-06 04:07:21.825088] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000037300 00:16:28.561 spare 00:16:28.561 04:07:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:28.561 04:07:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@764 -- # sleep 1 00:16:28.561 [2024-12-06 04:07:21.827327] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:16:29.498 04:07:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:29.498 04:07:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:29.498 04:07:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:29.498 04:07:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:29.498 04:07:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:29.498 04:07:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:29.498 04:07:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:29.498 04:07:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:29.498 04:07:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:29.498 04:07:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:29.758 04:07:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:29.758 "name": "raid_bdev1", 00:16:29.758 
"uuid": "8a44c5c4-5201-411b-be96-4dfc57b387d0", 00:16:29.758 "strip_size_kb": 0, 00:16:29.758 "state": "online", 00:16:29.758 "raid_level": "raid1", 00:16:29.758 "superblock": true, 00:16:29.758 "num_base_bdevs": 4, 00:16:29.758 "num_base_bdevs_discovered": 3, 00:16:29.758 "num_base_bdevs_operational": 3, 00:16:29.758 "process": { 00:16:29.758 "type": "rebuild", 00:16:29.758 "target": "spare", 00:16:29.758 "progress": { 00:16:29.758 "blocks": 20480, 00:16:29.758 "percent": 32 00:16:29.758 } 00:16:29.758 }, 00:16:29.758 "base_bdevs_list": [ 00:16:29.758 { 00:16:29.758 "name": "spare", 00:16:29.758 "uuid": "5839e026-dba2-5cf9-a2ff-d5da9aaf4211", 00:16:29.758 "is_configured": true, 00:16:29.758 "data_offset": 2048, 00:16:29.758 "data_size": 63488 00:16:29.758 }, 00:16:29.758 { 00:16:29.758 "name": null, 00:16:29.758 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:29.758 "is_configured": false, 00:16:29.758 "data_offset": 2048, 00:16:29.758 "data_size": 63488 00:16:29.758 }, 00:16:29.758 { 00:16:29.758 "name": "BaseBdev3", 00:16:29.758 "uuid": "94c489d5-5b9e-51a0-853b-afb9f6d46968", 00:16:29.758 "is_configured": true, 00:16:29.758 "data_offset": 2048, 00:16:29.758 "data_size": 63488 00:16:29.758 }, 00:16:29.758 { 00:16:29.758 "name": "BaseBdev4", 00:16:29.758 "uuid": "a1af2076-f95a-5757-bd4e-c8624781b061", 00:16:29.758 "is_configured": true, 00:16:29.758 "data_offset": 2048, 00:16:29.758 "data_size": 63488 00:16:29.758 } 00:16:29.758 ] 00:16:29.758 }' 00:16:29.758 04:07:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:29.758 04:07:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:29.758 04:07:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:29.758 04:07:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:29.758 04:07:22 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:16:29.758 04:07:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:29.758 04:07:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:29.758 [2024-12-06 04:07:22.950819] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:29.758 [2024-12-06 04:07:23.033725] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:16:29.758 [2024-12-06 04:07:23.033819] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:29.758 [2024-12-06 04:07:23.033841] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:29.758 [2024-12-06 04:07:23.033850] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:16:29.758 04:07:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:29.758 04:07:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:16:29.758 04:07:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:29.758 04:07:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:29.758 04:07:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:29.758 04:07:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:29.758 04:07:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:29.758 04:07:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:29.758 04:07:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:29.758 04:07:23 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:29.758 04:07:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:29.758 04:07:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:29.758 04:07:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:29.758 04:07:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:29.758 04:07:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:29.758 04:07:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:29.758 04:07:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:29.758 "name": "raid_bdev1", 00:16:29.758 "uuid": "8a44c5c4-5201-411b-be96-4dfc57b387d0", 00:16:29.758 "strip_size_kb": 0, 00:16:29.758 "state": "online", 00:16:29.758 "raid_level": "raid1", 00:16:29.758 "superblock": true, 00:16:29.758 "num_base_bdevs": 4, 00:16:29.758 "num_base_bdevs_discovered": 2, 00:16:29.758 "num_base_bdevs_operational": 2, 00:16:29.758 "base_bdevs_list": [ 00:16:29.758 { 00:16:29.758 "name": null, 00:16:29.758 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:29.758 "is_configured": false, 00:16:29.758 "data_offset": 0, 00:16:29.758 "data_size": 63488 00:16:29.758 }, 00:16:29.758 { 00:16:29.758 "name": null, 00:16:29.758 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:29.758 "is_configured": false, 00:16:29.758 "data_offset": 2048, 00:16:29.758 "data_size": 63488 00:16:29.758 }, 00:16:29.758 { 00:16:29.758 "name": "BaseBdev3", 00:16:29.758 "uuid": "94c489d5-5b9e-51a0-853b-afb9f6d46968", 00:16:29.758 "is_configured": true, 00:16:29.758 "data_offset": 2048, 00:16:29.758 "data_size": 63488 00:16:29.758 }, 00:16:29.758 { 00:16:29.758 "name": "BaseBdev4", 00:16:29.758 "uuid": 
"a1af2076-f95a-5757-bd4e-c8624781b061", 00:16:29.758 "is_configured": true, 00:16:29.758 "data_offset": 2048, 00:16:29.758 "data_size": 63488 00:16:29.758 } 00:16:29.758 ] 00:16:29.758 }' 00:16:29.758 04:07:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:29.758 04:07:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:30.329 04:07:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:30.329 04:07:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:30.329 04:07:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:30.329 04:07:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:30.329 04:07:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:30.329 04:07:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:30.329 04:07:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:30.329 04:07:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:30.329 04:07:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:30.329 04:07:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:30.329 04:07:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:30.329 "name": "raid_bdev1", 00:16:30.329 "uuid": "8a44c5c4-5201-411b-be96-4dfc57b387d0", 00:16:30.329 "strip_size_kb": 0, 00:16:30.329 "state": "online", 00:16:30.329 "raid_level": "raid1", 00:16:30.329 "superblock": true, 00:16:30.329 "num_base_bdevs": 4, 00:16:30.329 "num_base_bdevs_discovered": 2, 00:16:30.329 "num_base_bdevs_operational": 2, 00:16:30.329 
"base_bdevs_list": [ 00:16:30.329 { 00:16:30.329 "name": null, 00:16:30.329 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:30.329 "is_configured": false, 00:16:30.329 "data_offset": 0, 00:16:30.329 "data_size": 63488 00:16:30.329 }, 00:16:30.329 { 00:16:30.329 "name": null, 00:16:30.329 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:30.329 "is_configured": false, 00:16:30.329 "data_offset": 2048, 00:16:30.329 "data_size": 63488 00:16:30.329 }, 00:16:30.329 { 00:16:30.329 "name": "BaseBdev3", 00:16:30.329 "uuid": "94c489d5-5b9e-51a0-853b-afb9f6d46968", 00:16:30.329 "is_configured": true, 00:16:30.329 "data_offset": 2048, 00:16:30.329 "data_size": 63488 00:16:30.329 }, 00:16:30.329 { 00:16:30.329 "name": "BaseBdev4", 00:16:30.329 "uuid": "a1af2076-f95a-5757-bd4e-c8624781b061", 00:16:30.329 "is_configured": true, 00:16:30.329 "data_offset": 2048, 00:16:30.329 "data_size": 63488 00:16:30.329 } 00:16:30.329 ] 00:16:30.329 }' 00:16:30.329 04:07:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:30.329 04:07:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:30.329 04:07:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:30.329 04:07:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:30.329 04:07:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:16:30.329 04:07:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:30.329 04:07:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:30.329 04:07:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:30.329 04:07:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 
00:16:30.329 04:07:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:30.329 04:07:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:30.329 [2024-12-06 04:07:23.558007] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:16:30.329 [2024-12-06 04:07:23.558090] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:30.329 [2024-12-06 04:07:23.558117] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000cc80 00:16:30.329 [2024-12-06 04:07:23.558128] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:30.329 [2024-12-06 04:07:23.558642] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:30.329 [2024-12-06 04:07:23.558673] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:16:30.329 [2024-12-06 04:07:23.558777] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:16:30.329 [2024-12-06 04:07:23.558803] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (6) 00:16:30.329 [2024-12-06 04:07:23.558818] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:16:30.329 [2024-12-06 04:07:23.558831] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:16:30.329 BaseBdev1 00:16:30.329 04:07:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:30.329 04:07:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@775 -- # sleep 1 00:16:31.265 04:07:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:16:31.266 04:07:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # 
local raid_bdev_name=raid_bdev1 00:16:31.266 04:07:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:31.266 04:07:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:31.266 04:07:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:31.266 04:07:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:31.266 04:07:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:31.266 04:07:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:31.266 04:07:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:31.266 04:07:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:31.266 04:07:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:31.266 04:07:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:31.266 04:07:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:31.266 04:07:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:31.266 04:07:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:31.266 04:07:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:31.266 "name": "raid_bdev1", 00:16:31.266 "uuid": "8a44c5c4-5201-411b-be96-4dfc57b387d0", 00:16:31.266 "strip_size_kb": 0, 00:16:31.266 "state": "online", 00:16:31.266 "raid_level": "raid1", 00:16:31.266 "superblock": true, 00:16:31.266 "num_base_bdevs": 4, 00:16:31.266 "num_base_bdevs_discovered": 2, 00:16:31.266 "num_base_bdevs_operational": 2, 00:16:31.266 "base_bdevs_list": [ 00:16:31.266 { 00:16:31.266 
"name": null, 00:16:31.266 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:31.266 "is_configured": false, 00:16:31.266 "data_offset": 0, 00:16:31.266 "data_size": 63488 00:16:31.266 }, 00:16:31.266 { 00:16:31.266 "name": null, 00:16:31.266 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:31.266 "is_configured": false, 00:16:31.266 "data_offset": 2048, 00:16:31.266 "data_size": 63488 00:16:31.266 }, 00:16:31.266 { 00:16:31.266 "name": "BaseBdev3", 00:16:31.266 "uuid": "94c489d5-5b9e-51a0-853b-afb9f6d46968", 00:16:31.266 "is_configured": true, 00:16:31.266 "data_offset": 2048, 00:16:31.266 "data_size": 63488 00:16:31.266 }, 00:16:31.266 { 00:16:31.266 "name": "BaseBdev4", 00:16:31.266 "uuid": "a1af2076-f95a-5757-bd4e-c8624781b061", 00:16:31.266 "is_configured": true, 00:16:31.266 "data_offset": 2048, 00:16:31.266 "data_size": 63488 00:16:31.266 } 00:16:31.266 ] 00:16:31.266 }' 00:16:31.266 04:07:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:31.266 04:07:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:31.834 04:07:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:31.834 04:07:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:31.834 04:07:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:31.834 04:07:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:31.834 04:07:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:31.834 04:07:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:31.834 04:07:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:31.834 04:07:24 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:16:31.834 04:07:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:31.834 04:07:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:31.834 04:07:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:31.834 "name": "raid_bdev1", 00:16:31.834 "uuid": "8a44c5c4-5201-411b-be96-4dfc57b387d0", 00:16:31.834 "strip_size_kb": 0, 00:16:31.834 "state": "online", 00:16:31.834 "raid_level": "raid1", 00:16:31.834 "superblock": true, 00:16:31.834 "num_base_bdevs": 4, 00:16:31.834 "num_base_bdevs_discovered": 2, 00:16:31.834 "num_base_bdevs_operational": 2, 00:16:31.834 "base_bdevs_list": [ 00:16:31.834 { 00:16:31.834 "name": null, 00:16:31.834 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:31.834 "is_configured": false, 00:16:31.834 "data_offset": 0, 00:16:31.834 "data_size": 63488 00:16:31.834 }, 00:16:31.834 { 00:16:31.834 "name": null, 00:16:31.834 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:31.834 "is_configured": false, 00:16:31.834 "data_offset": 2048, 00:16:31.834 "data_size": 63488 00:16:31.834 }, 00:16:31.834 { 00:16:31.834 "name": "BaseBdev3", 00:16:31.834 "uuid": "94c489d5-5b9e-51a0-853b-afb9f6d46968", 00:16:31.834 "is_configured": true, 00:16:31.834 "data_offset": 2048, 00:16:31.834 "data_size": 63488 00:16:31.834 }, 00:16:31.834 { 00:16:31.834 "name": "BaseBdev4", 00:16:31.834 "uuid": "a1af2076-f95a-5757-bd4e-c8624781b061", 00:16:31.834 "is_configured": true, 00:16:31.834 "data_offset": 2048, 00:16:31.834 "data_size": 63488 00:16:31.834 } 00:16:31.834 ] 00:16:31.834 }' 00:16:31.834 04:07:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:31.834 04:07:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:31.834 04:07:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r 
'.process.target // "none"' 00:16:31.834 04:07:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:31.834 04:07:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:16:31.834 04:07:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@652 -- # local es=0 00:16:31.835 04:07:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:16:31.835 04:07:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:16:31.835 04:07:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:31.835 04:07:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:16:31.835 04:07:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:31.835 04:07:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:16:31.835 04:07:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:31.835 04:07:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:31.835 [2024-12-06 04:07:25.047850] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:31.835 [2024-12-06 04:07:25.048077] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (6) 00:16:31.835 [2024-12-06 04:07:25.048111] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:16:31.835 request: 00:16:31.835 { 00:16:31.835 "base_bdev": "BaseBdev1", 00:16:31.835 "raid_bdev": "raid_bdev1", 00:16:31.835 "method": "bdev_raid_add_base_bdev", 00:16:31.835 
"req_id": 1 00:16:31.835 } 00:16:31.835 Got JSON-RPC error response 00:16:31.835 response: 00:16:31.835 { 00:16:31.835 "code": -22, 00:16:31.835 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:16:31.835 } 00:16:31.835 04:07:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:16:31.835 04:07:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@655 -- # es=1 00:16:31.835 04:07:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:16:31.835 04:07:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:16:31.835 04:07:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:16:31.835 04:07:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@779 -- # sleep 1 00:16:32.767 04:07:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:16:32.767 04:07:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:32.767 04:07:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:32.767 04:07:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:32.767 04:07:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:32.767 04:07:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:32.767 04:07:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:32.767 04:07:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:32.767 04:07:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:32.767 04:07:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:32.767 
04:07:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:32.767 04:07:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:32.767 04:07:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:32.767 04:07:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:32.767 04:07:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:32.767 04:07:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:32.767 "name": "raid_bdev1", 00:16:32.767 "uuid": "8a44c5c4-5201-411b-be96-4dfc57b387d0", 00:16:32.767 "strip_size_kb": 0, 00:16:32.767 "state": "online", 00:16:32.767 "raid_level": "raid1", 00:16:32.767 "superblock": true, 00:16:32.767 "num_base_bdevs": 4, 00:16:32.767 "num_base_bdevs_discovered": 2, 00:16:32.767 "num_base_bdevs_operational": 2, 00:16:32.767 "base_bdevs_list": [ 00:16:32.767 { 00:16:32.767 "name": null, 00:16:32.767 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:32.767 "is_configured": false, 00:16:32.767 "data_offset": 0, 00:16:32.767 "data_size": 63488 00:16:32.767 }, 00:16:32.767 { 00:16:32.767 "name": null, 00:16:32.767 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:32.767 "is_configured": false, 00:16:32.767 "data_offset": 2048, 00:16:32.767 "data_size": 63488 00:16:32.767 }, 00:16:32.767 { 00:16:32.767 "name": "BaseBdev3", 00:16:32.767 "uuid": "94c489d5-5b9e-51a0-853b-afb9f6d46968", 00:16:32.767 "is_configured": true, 00:16:32.767 "data_offset": 2048, 00:16:32.767 "data_size": 63488 00:16:32.767 }, 00:16:32.767 { 00:16:32.767 "name": "BaseBdev4", 00:16:32.767 "uuid": "a1af2076-f95a-5757-bd4e-c8624781b061", 00:16:32.767 "is_configured": true, 00:16:32.767 "data_offset": 2048, 00:16:32.767 "data_size": 63488 00:16:32.767 } 00:16:32.767 ] 00:16:32.767 }' 00:16:32.767 04:07:26 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:32.767 04:07:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:33.334 04:07:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:33.334 04:07:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:33.334 04:07:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:33.334 04:07:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:33.334 04:07:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:33.334 04:07:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:33.334 04:07:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:33.334 04:07:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:33.334 04:07:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:33.334 04:07:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:33.334 04:07:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:33.334 "name": "raid_bdev1", 00:16:33.334 "uuid": "8a44c5c4-5201-411b-be96-4dfc57b387d0", 00:16:33.334 "strip_size_kb": 0, 00:16:33.334 "state": "online", 00:16:33.334 "raid_level": "raid1", 00:16:33.334 "superblock": true, 00:16:33.334 "num_base_bdevs": 4, 00:16:33.334 "num_base_bdevs_discovered": 2, 00:16:33.334 "num_base_bdevs_operational": 2, 00:16:33.334 "base_bdevs_list": [ 00:16:33.334 { 00:16:33.334 "name": null, 00:16:33.334 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:33.334 "is_configured": false, 00:16:33.334 "data_offset": 0, 00:16:33.334 
"data_size": 63488 00:16:33.334 }, 00:16:33.334 { 00:16:33.334 "name": null, 00:16:33.334 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:33.334 "is_configured": false, 00:16:33.334 "data_offset": 2048, 00:16:33.334 "data_size": 63488 00:16:33.334 }, 00:16:33.334 { 00:16:33.334 "name": "BaseBdev3", 00:16:33.334 "uuid": "94c489d5-5b9e-51a0-853b-afb9f6d46968", 00:16:33.334 "is_configured": true, 00:16:33.334 "data_offset": 2048, 00:16:33.334 "data_size": 63488 00:16:33.334 }, 00:16:33.334 { 00:16:33.334 "name": "BaseBdev4", 00:16:33.334 "uuid": "a1af2076-f95a-5757-bd4e-c8624781b061", 00:16:33.334 "is_configured": true, 00:16:33.334 "data_offset": 2048, 00:16:33.334 "data_size": 63488 00:16:33.334 } 00:16:33.334 ] 00:16:33.334 }' 00:16:33.334 04:07:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:33.334 04:07:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:33.334 04:07:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:33.334 04:07:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:33.334 04:07:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@784 -- # killprocess 79424 00:16:33.334 04:07:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@954 -- # '[' -z 79424 ']' 00:16:33.334 04:07:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@958 -- # kill -0 79424 00:16:33.334 04:07:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@959 -- # uname 00:16:33.334 04:07:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:33.334 04:07:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 79424 00:16:33.334 04:07:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:33.334 
04:07:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:33.334 killing process with pid 79424 00:16:33.334 04:07:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@972 -- # echo 'killing process with pid 79424' 00:16:33.334 04:07:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@973 -- # kill 79424 00:16:33.334 Received shutdown signal, test time was about 18.614402 seconds 00:16:33.334 00:16:33.334 Latency(us) 00:16:33.334 [2024-12-06T04:07:26.688Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:33.334 [2024-12-06T04:07:26.688Z] =================================================================================================================== 00:16:33.334 [2024-12-06T04:07:26.688Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:16:33.334 04:07:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@978 -- # wait 79424 00:16:33.334 [2024-12-06 04:07:26.573243] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:16:33.334 [2024-12-06 04:07:26.573395] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:33.334 [2024-12-06 04:07:26.573493] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:33.334 [2024-12-06 04:07:26.573512] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:16:33.901 [2024-12-06 04:07:27.096463] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:16:35.279 04:07:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@786 -- # return 0 00:16:35.279 00:16:35.279 real 0m22.471s 00:16:35.279 user 0m28.666s 00:16:35.279 sys 0m2.209s 00:16:35.279 04:07:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:35.279 04:07:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:35.279 
************************************ 00:16:35.279 END TEST raid_rebuild_test_sb_io 00:16:35.279 ************************************ 00:16:35.279 04:07:28 bdev_raid -- bdev/bdev_raid.sh@985 -- # for n in {3..4} 00:16:35.279 04:07:28 bdev_raid -- bdev/bdev_raid.sh@986 -- # run_test raid5f_state_function_test raid_state_function_test raid5f 3 false 00:16:35.279 04:07:28 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:16:35.279 04:07:28 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:35.279 04:07:28 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:16:35.279 ************************************ 00:16:35.279 START TEST raid5f_state_function_test 00:16:35.279 ************************************ 00:16:35.279 04:07:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test raid5f 3 false 00:16:35.279 04:07:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid5f 00:16:35.279 04:07:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:16:35.279 04:07:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:16:35.279 04:07:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:16:35.279 04:07:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:16:35.279 04:07:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:16:35.279 04:07:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:16:35.279 04:07:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:16:35.279 04:07:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:16:35.279 04:07:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:16:35.279 04:07:28 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:16:35.279 04:07:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:16:35.279 04:07:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:16:35.279 04:07:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:16:35.279 04:07:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:16:35.279 04:07:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:16:35.279 04:07:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:16:35.279 04:07:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:16:35.279 04:07:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:16:35.279 04:07:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:16:35.279 04:07:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:16:35.279 04:07:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid5f '!=' raid1 ']' 00:16:35.279 04:07:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:16:35.279 04:07:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:16:35.279 04:07:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:16:35.279 04:07:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:16:35.279 04:07:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=80165 00:16:35.279 04:07:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@228 -- # 
/home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:16:35.279 04:07:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 80165' 00:16:35.279 Process raid pid: 80165 00:16:35.279 04:07:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 80165 00:16:35.279 04:07:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 80165 ']' 00:16:35.279 04:07:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:35.279 04:07:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:35.279 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:35.279 04:07:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:35.279 04:07:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:35.279 04:07:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:35.538 [2024-12-06 04:07:28.670304] Starting SPDK v25.01-pre git sha1 a4d2a837b / DPDK 24.03.0 initialization... 
00:16:35.538 [2024-12-06 04:07:28.670461] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:35.538 [2024-12-06 04:07:28.833255] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:35.797 [2024-12-06 04:07:28.966604] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:36.055 [2024-12-06 04:07:29.208238] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:36.055 [2024-12-06 04:07:29.208314] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:36.315 04:07:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:36.315 04:07:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:16:36.315 04:07:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:16:36.315 04:07:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:36.315 04:07:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:36.315 [2024-12-06 04:07:29.590366] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:16:36.315 [2024-12-06 04:07:29.590431] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:16:36.315 [2024-12-06 04:07:29.590444] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:36.315 [2024-12-06 04:07:29.590456] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:36.315 [2024-12-06 04:07:29.590464] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 
BaseBdev3 00:16:36.315 [2024-12-06 04:07:29.590474] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:16:36.315 04:07:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:36.315 04:07:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:16:36.315 04:07:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:36.315 04:07:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:36.315 04:07:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:36.315 04:07:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:36.315 04:07:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:36.315 04:07:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:36.315 04:07:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:36.315 04:07:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:36.315 04:07:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:36.315 04:07:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:36.315 04:07:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:36.315 04:07:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:36.315 04:07:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:36.315 04:07:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # 
[[ 0 == 0 ]] 00:16:36.315 04:07:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:36.315 "name": "Existed_Raid", 00:16:36.315 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:36.315 "strip_size_kb": 64, 00:16:36.315 "state": "configuring", 00:16:36.315 "raid_level": "raid5f", 00:16:36.315 "superblock": false, 00:16:36.315 "num_base_bdevs": 3, 00:16:36.315 "num_base_bdevs_discovered": 0, 00:16:36.315 "num_base_bdevs_operational": 3, 00:16:36.315 "base_bdevs_list": [ 00:16:36.315 { 00:16:36.315 "name": "BaseBdev1", 00:16:36.315 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:36.315 "is_configured": false, 00:16:36.315 "data_offset": 0, 00:16:36.315 "data_size": 0 00:16:36.315 }, 00:16:36.315 { 00:16:36.315 "name": "BaseBdev2", 00:16:36.315 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:36.315 "is_configured": false, 00:16:36.315 "data_offset": 0, 00:16:36.315 "data_size": 0 00:16:36.315 }, 00:16:36.315 { 00:16:36.315 "name": "BaseBdev3", 00:16:36.315 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:36.315 "is_configured": false, 00:16:36.315 "data_offset": 0, 00:16:36.315 "data_size": 0 00:16:36.315 } 00:16:36.315 ] 00:16:36.315 }' 00:16:36.315 04:07:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:36.315 04:07:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:36.654 04:07:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:16:36.654 04:07:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:36.654 04:07:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:36.654 [2024-12-06 04:07:29.965784] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:16:36.654 [2024-12-06 04:07:29.965831] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 
0x617000007780 name Existed_Raid, state configuring 00:16:36.654 04:07:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:36.654 04:07:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:16:36.654 04:07:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:36.654 04:07:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:36.654 [2024-12-06 04:07:29.977773] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:16:36.654 [2024-12-06 04:07:29.977837] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:16:36.654 [2024-12-06 04:07:29.977860] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:36.654 [2024-12-06 04:07:29.977871] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:36.654 [2024-12-06 04:07:29.977878] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:16:36.654 [2024-12-06 04:07:29.977905] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:16:36.654 04:07:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:36.654 04:07:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:16:36.654 04:07:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:36.654 04:07:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:36.914 [2024-12-06 04:07:30.034961] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:36.914 BaseBdev1 00:16:36.914 04:07:30 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:36.914 04:07:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:16:36.914 04:07:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:16:36.914 04:07:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:16:36.914 04:07:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:16:36.914 04:07:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:16:36.914 04:07:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:16:36.914 04:07:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:16:36.914 04:07:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:36.914 04:07:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:36.914 04:07:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:36.914 04:07:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:16:36.914 04:07:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:36.914 04:07:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:36.914 [ 00:16:36.914 { 00:16:36.914 "name": "BaseBdev1", 00:16:36.914 "aliases": [ 00:16:36.914 "2f540776-7e50-45be-86e5-96f4d070b7b3" 00:16:36.914 ], 00:16:36.914 "product_name": "Malloc disk", 00:16:36.914 "block_size": 512, 00:16:36.914 "num_blocks": 65536, 00:16:36.914 "uuid": "2f540776-7e50-45be-86e5-96f4d070b7b3", 00:16:36.914 "assigned_rate_limits": { 00:16:36.914 "rw_ios_per_sec": 0, 00:16:36.914 
"rw_mbytes_per_sec": 0, 00:16:36.914 "r_mbytes_per_sec": 0, 00:16:36.914 "w_mbytes_per_sec": 0 00:16:36.914 }, 00:16:36.914 "claimed": true, 00:16:36.914 "claim_type": "exclusive_write", 00:16:36.914 "zoned": false, 00:16:36.914 "supported_io_types": { 00:16:36.914 "read": true, 00:16:36.914 "write": true, 00:16:36.914 "unmap": true, 00:16:36.914 "flush": true, 00:16:36.914 "reset": true, 00:16:36.914 "nvme_admin": false, 00:16:36.914 "nvme_io": false, 00:16:36.914 "nvme_io_md": false, 00:16:36.914 "write_zeroes": true, 00:16:36.914 "zcopy": true, 00:16:36.914 "get_zone_info": false, 00:16:36.914 "zone_management": false, 00:16:36.914 "zone_append": false, 00:16:36.914 "compare": false, 00:16:36.914 "compare_and_write": false, 00:16:36.914 "abort": true, 00:16:36.914 "seek_hole": false, 00:16:36.914 "seek_data": false, 00:16:36.914 "copy": true, 00:16:36.914 "nvme_iov_md": false 00:16:36.914 }, 00:16:36.914 "memory_domains": [ 00:16:36.914 { 00:16:36.914 "dma_device_id": "system", 00:16:36.914 "dma_device_type": 1 00:16:36.914 }, 00:16:36.914 { 00:16:36.914 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:36.914 "dma_device_type": 2 00:16:36.914 } 00:16:36.914 ], 00:16:36.914 "driver_specific": {} 00:16:36.914 } 00:16:36.914 ] 00:16:36.914 04:07:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:36.914 04:07:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:16:36.914 04:07:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:16:36.914 04:07:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:36.914 04:07:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:36.914 04:07:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:36.914 04:07:30 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:36.914 04:07:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:36.914 04:07:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:36.914 04:07:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:36.914 04:07:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:36.914 04:07:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:36.914 04:07:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:36.914 04:07:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:36.915 04:07:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:36.915 04:07:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:36.915 04:07:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:36.915 04:07:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:36.915 "name": "Existed_Raid", 00:16:36.915 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:36.915 "strip_size_kb": 64, 00:16:36.915 "state": "configuring", 00:16:36.915 "raid_level": "raid5f", 00:16:36.915 "superblock": false, 00:16:36.915 "num_base_bdevs": 3, 00:16:36.915 "num_base_bdevs_discovered": 1, 00:16:36.915 "num_base_bdevs_operational": 3, 00:16:36.915 "base_bdevs_list": [ 00:16:36.915 { 00:16:36.915 "name": "BaseBdev1", 00:16:36.915 "uuid": "2f540776-7e50-45be-86e5-96f4d070b7b3", 00:16:36.915 "is_configured": true, 00:16:36.915 "data_offset": 0, 00:16:36.915 "data_size": 65536 00:16:36.915 }, 00:16:36.915 { 00:16:36.915 "name": 
"BaseBdev2", 00:16:36.915 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:36.915 "is_configured": false, 00:16:36.915 "data_offset": 0, 00:16:36.915 "data_size": 0 00:16:36.915 }, 00:16:36.915 { 00:16:36.915 "name": "BaseBdev3", 00:16:36.915 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:36.915 "is_configured": false, 00:16:36.915 "data_offset": 0, 00:16:36.915 "data_size": 0 00:16:36.915 } 00:16:36.915 ] 00:16:36.915 }' 00:16:36.915 04:07:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:36.915 04:07:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:37.173 04:07:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:16:37.173 04:07:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:37.173 04:07:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:37.173 [2024-12-06 04:07:30.474294] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:16:37.173 [2024-12-06 04:07:30.474363] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:16:37.173 04:07:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:37.173 04:07:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:16:37.173 04:07:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:37.173 04:07:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:37.173 [2024-12-06 04:07:30.482350] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:37.174 [2024-12-06 04:07:30.484566] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to 
find bdev with name: BaseBdev2 00:16:37.174 [2024-12-06 04:07:30.484621] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:37.174 [2024-12-06 04:07:30.484633] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:16:37.174 [2024-12-06 04:07:30.484643] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:16:37.174 04:07:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:37.174 04:07:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:16:37.174 04:07:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:16:37.174 04:07:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:16:37.174 04:07:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:37.174 04:07:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:37.174 04:07:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:37.174 04:07:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:37.174 04:07:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:37.174 04:07:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:37.174 04:07:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:37.174 04:07:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:37.174 04:07:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:37.174 04:07:30 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:37.174 04:07:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:37.174 04:07:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:37.174 04:07:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:37.174 04:07:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:37.432 04:07:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:37.432 "name": "Existed_Raid", 00:16:37.432 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:37.432 "strip_size_kb": 64, 00:16:37.432 "state": "configuring", 00:16:37.432 "raid_level": "raid5f", 00:16:37.432 "superblock": false, 00:16:37.432 "num_base_bdevs": 3, 00:16:37.432 "num_base_bdevs_discovered": 1, 00:16:37.432 "num_base_bdevs_operational": 3, 00:16:37.432 "base_bdevs_list": [ 00:16:37.432 { 00:16:37.432 "name": "BaseBdev1", 00:16:37.432 "uuid": "2f540776-7e50-45be-86e5-96f4d070b7b3", 00:16:37.432 "is_configured": true, 00:16:37.432 "data_offset": 0, 00:16:37.432 "data_size": 65536 00:16:37.432 }, 00:16:37.432 { 00:16:37.432 "name": "BaseBdev2", 00:16:37.432 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:37.432 "is_configured": false, 00:16:37.432 "data_offset": 0, 00:16:37.432 "data_size": 0 00:16:37.433 }, 00:16:37.433 { 00:16:37.433 "name": "BaseBdev3", 00:16:37.433 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:37.433 "is_configured": false, 00:16:37.433 "data_offset": 0, 00:16:37.433 "data_size": 0 00:16:37.433 } 00:16:37.433 ] 00:16:37.433 }' 00:16:37.433 04:07:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:37.433 04:07:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:37.693 04:07:30 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:16:37.693 04:07:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:37.693 04:07:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:37.693 [2024-12-06 04:07:30.923011] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:37.693 BaseBdev2 00:16:37.693 04:07:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:37.693 04:07:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:16:37.693 04:07:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:16:37.693 04:07:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:16:37.693 04:07:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:16:37.693 04:07:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:16:37.693 04:07:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:16:37.693 04:07:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:16:37.693 04:07:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:37.693 04:07:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:37.693 04:07:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:37.693 04:07:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:16:37.693 04:07:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:37.693 04:07:30 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:16:37.693 [ 00:16:37.693 { 00:16:37.693 "name": "BaseBdev2", 00:16:37.693 "aliases": [ 00:16:37.693 "5fb30f20-1894-4e68-85bc-653685c0597c" 00:16:37.693 ], 00:16:37.693 "product_name": "Malloc disk", 00:16:37.693 "block_size": 512, 00:16:37.693 "num_blocks": 65536, 00:16:37.693 "uuid": "5fb30f20-1894-4e68-85bc-653685c0597c", 00:16:37.693 "assigned_rate_limits": { 00:16:37.693 "rw_ios_per_sec": 0, 00:16:37.693 "rw_mbytes_per_sec": 0, 00:16:37.693 "r_mbytes_per_sec": 0, 00:16:37.693 "w_mbytes_per_sec": 0 00:16:37.693 }, 00:16:37.693 "claimed": true, 00:16:37.693 "claim_type": "exclusive_write", 00:16:37.693 "zoned": false, 00:16:37.693 "supported_io_types": { 00:16:37.693 "read": true, 00:16:37.693 "write": true, 00:16:37.693 "unmap": true, 00:16:37.693 "flush": true, 00:16:37.693 "reset": true, 00:16:37.693 "nvme_admin": false, 00:16:37.693 "nvme_io": false, 00:16:37.693 "nvme_io_md": false, 00:16:37.693 "write_zeroes": true, 00:16:37.693 "zcopy": true, 00:16:37.693 "get_zone_info": false, 00:16:37.693 "zone_management": false, 00:16:37.693 "zone_append": false, 00:16:37.693 "compare": false, 00:16:37.693 "compare_and_write": false, 00:16:37.693 "abort": true, 00:16:37.693 "seek_hole": false, 00:16:37.693 "seek_data": false, 00:16:37.693 "copy": true, 00:16:37.693 "nvme_iov_md": false 00:16:37.693 }, 00:16:37.693 "memory_domains": [ 00:16:37.693 { 00:16:37.693 "dma_device_id": "system", 00:16:37.693 "dma_device_type": 1 00:16:37.693 }, 00:16:37.693 { 00:16:37.693 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:37.693 "dma_device_type": 2 00:16:37.693 } 00:16:37.693 ], 00:16:37.693 "driver_specific": {} 00:16:37.693 } 00:16:37.693 ] 00:16:37.693 04:07:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:37.693 04:07:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:16:37.693 04:07:30 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@250 -- # (( i++ )) 00:16:37.693 04:07:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:16:37.693 04:07:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:16:37.693 04:07:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:37.693 04:07:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:37.693 04:07:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:37.693 04:07:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:37.693 04:07:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:37.693 04:07:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:37.693 04:07:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:37.693 04:07:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:37.693 04:07:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:37.693 04:07:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:37.693 04:07:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:37.693 04:07:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:37.693 04:07:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:37.694 04:07:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:37.694 04:07:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- 
# raid_bdev_info='{ 00:16:37.694 "name": "Existed_Raid", 00:16:37.694 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:37.694 "strip_size_kb": 64, 00:16:37.694 "state": "configuring", 00:16:37.694 "raid_level": "raid5f", 00:16:37.694 "superblock": false, 00:16:37.694 "num_base_bdevs": 3, 00:16:37.694 "num_base_bdevs_discovered": 2, 00:16:37.694 "num_base_bdevs_operational": 3, 00:16:37.694 "base_bdevs_list": [ 00:16:37.694 { 00:16:37.694 "name": "BaseBdev1", 00:16:37.694 "uuid": "2f540776-7e50-45be-86e5-96f4d070b7b3", 00:16:37.694 "is_configured": true, 00:16:37.694 "data_offset": 0, 00:16:37.694 "data_size": 65536 00:16:37.694 }, 00:16:37.694 { 00:16:37.694 "name": "BaseBdev2", 00:16:37.694 "uuid": "5fb30f20-1894-4e68-85bc-653685c0597c", 00:16:37.694 "is_configured": true, 00:16:37.694 "data_offset": 0, 00:16:37.694 "data_size": 65536 00:16:37.694 }, 00:16:37.694 { 00:16:37.694 "name": "BaseBdev3", 00:16:37.694 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:37.694 "is_configured": false, 00:16:37.694 "data_offset": 0, 00:16:37.694 "data_size": 0 00:16:37.694 } 00:16:37.694 ] 00:16:37.694 }' 00:16:37.694 04:07:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:37.694 04:07:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:38.264 04:07:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:16:38.264 04:07:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:38.264 04:07:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:38.264 [2024-12-06 04:07:31.416990] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:16:38.264 [2024-12-06 04:07:31.417111] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:16:38.264 [2024-12-06 04:07:31.417133] 
bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:16:38.264 [2024-12-06 04:07:31.417451] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:16:38.264 [2024-12-06 04:07:31.424211] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:16:38.264 [2024-12-06 04:07:31.424252] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:16:38.264 [2024-12-06 04:07:31.424676] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:38.264 BaseBdev3 00:16:38.264 04:07:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:38.264 04:07:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:16:38.264 04:07:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:16:38.264 04:07:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:16:38.264 04:07:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:16:38.264 04:07:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:16:38.264 04:07:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:16:38.264 04:07:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:16:38.264 04:07:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:38.264 04:07:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:38.264 04:07:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:38.264 04:07:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b 
BaseBdev3 -t 2000 00:16:38.264 04:07:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:38.264 04:07:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:38.264 [ 00:16:38.264 { 00:16:38.264 "name": "BaseBdev3", 00:16:38.264 "aliases": [ 00:16:38.264 "4d4809a7-c736-4d91-b8f6-c488f34163a6" 00:16:38.264 ], 00:16:38.264 "product_name": "Malloc disk", 00:16:38.264 "block_size": 512, 00:16:38.264 "num_blocks": 65536, 00:16:38.264 "uuid": "4d4809a7-c736-4d91-b8f6-c488f34163a6", 00:16:38.264 "assigned_rate_limits": { 00:16:38.264 "rw_ios_per_sec": 0, 00:16:38.264 "rw_mbytes_per_sec": 0, 00:16:38.264 "r_mbytes_per_sec": 0, 00:16:38.264 "w_mbytes_per_sec": 0 00:16:38.264 }, 00:16:38.264 "claimed": true, 00:16:38.264 "claim_type": "exclusive_write", 00:16:38.264 "zoned": false, 00:16:38.264 "supported_io_types": { 00:16:38.264 "read": true, 00:16:38.264 "write": true, 00:16:38.264 "unmap": true, 00:16:38.264 "flush": true, 00:16:38.264 "reset": true, 00:16:38.264 "nvme_admin": false, 00:16:38.264 "nvme_io": false, 00:16:38.264 "nvme_io_md": false, 00:16:38.264 "write_zeroes": true, 00:16:38.264 "zcopy": true, 00:16:38.264 "get_zone_info": false, 00:16:38.264 "zone_management": false, 00:16:38.264 "zone_append": false, 00:16:38.264 "compare": false, 00:16:38.264 "compare_and_write": false, 00:16:38.264 "abort": true, 00:16:38.264 "seek_hole": false, 00:16:38.264 "seek_data": false, 00:16:38.264 "copy": true, 00:16:38.264 "nvme_iov_md": false 00:16:38.264 }, 00:16:38.264 "memory_domains": [ 00:16:38.264 { 00:16:38.264 "dma_device_id": "system", 00:16:38.264 "dma_device_type": 1 00:16:38.264 }, 00:16:38.264 { 00:16:38.264 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:38.264 "dma_device_type": 2 00:16:38.264 } 00:16:38.264 ], 00:16:38.264 "driver_specific": {} 00:16:38.264 } 00:16:38.264 ] 00:16:38.264 04:07:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 
0 ]] 00:16:38.264 04:07:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:16:38.264 04:07:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:16:38.264 04:07:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:16:38.264 04:07:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:16:38.265 04:07:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:38.265 04:07:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:38.265 04:07:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:38.265 04:07:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:38.265 04:07:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:38.265 04:07:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:38.265 04:07:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:38.265 04:07:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:38.265 04:07:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:38.265 04:07:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:38.265 04:07:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:38.265 04:07:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:38.265 04:07:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:38.265 04:07:31 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:38.265 04:07:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:38.265 "name": "Existed_Raid", 00:16:38.265 "uuid": "c1d996af-25df-403f-aff9-96d201378194", 00:16:38.265 "strip_size_kb": 64, 00:16:38.265 "state": "online", 00:16:38.265 "raid_level": "raid5f", 00:16:38.265 "superblock": false, 00:16:38.265 "num_base_bdevs": 3, 00:16:38.265 "num_base_bdevs_discovered": 3, 00:16:38.265 "num_base_bdevs_operational": 3, 00:16:38.265 "base_bdevs_list": [ 00:16:38.265 { 00:16:38.265 "name": "BaseBdev1", 00:16:38.265 "uuid": "2f540776-7e50-45be-86e5-96f4d070b7b3", 00:16:38.265 "is_configured": true, 00:16:38.265 "data_offset": 0, 00:16:38.265 "data_size": 65536 00:16:38.265 }, 00:16:38.265 { 00:16:38.265 "name": "BaseBdev2", 00:16:38.265 "uuid": "5fb30f20-1894-4e68-85bc-653685c0597c", 00:16:38.265 "is_configured": true, 00:16:38.265 "data_offset": 0, 00:16:38.265 "data_size": 65536 00:16:38.265 }, 00:16:38.265 { 00:16:38.265 "name": "BaseBdev3", 00:16:38.265 "uuid": "4d4809a7-c736-4d91-b8f6-c488f34163a6", 00:16:38.265 "is_configured": true, 00:16:38.265 "data_offset": 0, 00:16:38.265 "data_size": 65536 00:16:38.265 } 00:16:38.265 ] 00:16:38.265 }' 00:16:38.265 04:07:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:38.265 04:07:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:38.525 04:07:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:16:38.525 04:07:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:16:38.525 04:07:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:16:38.525 04:07:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:16:38.525 04:07:31 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:16:38.525 04:07:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:16:38.525 04:07:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:16:38.525 04:07:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:16:38.525 04:07:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:38.525 04:07:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:38.525 [2024-12-06 04:07:31.871689] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:38.785 04:07:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:38.785 04:07:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:16:38.785 "name": "Existed_Raid", 00:16:38.785 "aliases": [ 00:16:38.785 "c1d996af-25df-403f-aff9-96d201378194" 00:16:38.785 ], 00:16:38.785 "product_name": "Raid Volume", 00:16:38.785 "block_size": 512, 00:16:38.785 "num_blocks": 131072, 00:16:38.785 "uuid": "c1d996af-25df-403f-aff9-96d201378194", 00:16:38.785 "assigned_rate_limits": { 00:16:38.785 "rw_ios_per_sec": 0, 00:16:38.785 "rw_mbytes_per_sec": 0, 00:16:38.785 "r_mbytes_per_sec": 0, 00:16:38.785 "w_mbytes_per_sec": 0 00:16:38.785 }, 00:16:38.785 "claimed": false, 00:16:38.785 "zoned": false, 00:16:38.785 "supported_io_types": { 00:16:38.785 "read": true, 00:16:38.785 "write": true, 00:16:38.785 "unmap": false, 00:16:38.785 "flush": false, 00:16:38.785 "reset": true, 00:16:38.785 "nvme_admin": false, 00:16:38.785 "nvme_io": false, 00:16:38.785 "nvme_io_md": false, 00:16:38.785 "write_zeroes": true, 00:16:38.785 "zcopy": false, 00:16:38.785 "get_zone_info": false, 00:16:38.785 "zone_management": false, 00:16:38.785 "zone_append": false, 
00:16:38.785 "compare": false, 00:16:38.785 "compare_and_write": false, 00:16:38.785 "abort": false, 00:16:38.785 "seek_hole": false, 00:16:38.785 "seek_data": false, 00:16:38.785 "copy": false, 00:16:38.785 "nvme_iov_md": false 00:16:38.785 }, 00:16:38.785 "driver_specific": { 00:16:38.785 "raid": { 00:16:38.785 "uuid": "c1d996af-25df-403f-aff9-96d201378194", 00:16:38.785 "strip_size_kb": 64, 00:16:38.785 "state": "online", 00:16:38.785 "raid_level": "raid5f", 00:16:38.785 "superblock": false, 00:16:38.785 "num_base_bdevs": 3, 00:16:38.785 "num_base_bdevs_discovered": 3, 00:16:38.785 "num_base_bdevs_operational": 3, 00:16:38.785 "base_bdevs_list": [ 00:16:38.785 { 00:16:38.785 "name": "BaseBdev1", 00:16:38.785 "uuid": "2f540776-7e50-45be-86e5-96f4d070b7b3", 00:16:38.785 "is_configured": true, 00:16:38.785 "data_offset": 0, 00:16:38.785 "data_size": 65536 00:16:38.785 }, 00:16:38.785 { 00:16:38.785 "name": "BaseBdev2", 00:16:38.785 "uuid": "5fb30f20-1894-4e68-85bc-653685c0597c", 00:16:38.785 "is_configured": true, 00:16:38.785 "data_offset": 0, 00:16:38.785 "data_size": 65536 00:16:38.785 }, 00:16:38.785 { 00:16:38.785 "name": "BaseBdev3", 00:16:38.786 "uuid": "4d4809a7-c736-4d91-b8f6-c488f34163a6", 00:16:38.786 "is_configured": true, 00:16:38.786 "data_offset": 0, 00:16:38.786 "data_size": 65536 00:16:38.786 } 00:16:38.786 ] 00:16:38.786 } 00:16:38.786 } 00:16:38.786 }' 00:16:38.786 04:07:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:16:38.786 04:07:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:16:38.786 BaseBdev2 00:16:38.786 BaseBdev3' 00:16:38.786 04:07:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:38.786 04:07:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 
' 00:16:38.786 04:07:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:38.786 04:07:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:38.786 04:07:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:16:38.786 04:07:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:38.786 04:07:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:38.786 04:07:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:38.786 04:07:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:38.786 04:07:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:38.786 04:07:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:38.786 04:07:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:16:38.786 04:07:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:38.786 04:07:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:38.786 04:07:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:38.786 04:07:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:38.786 04:07:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:38.786 04:07:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:38.786 04:07:32 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:38.786 04:07:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:16:38.786 04:07:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:38.786 04:07:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:38.786 04:07:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:38.786 04:07:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:38.786 04:07:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:38.786 04:07:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:38.786 04:07:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:16:38.786 04:07:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:38.786 04:07:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:38.786 [2024-12-06 04:07:32.079166] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:16:39.054 04:07:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:39.054 04:07:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:16:39.054 04:07:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid5f 00:16:39.054 04:07:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:16:39.054 04:07:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@199 -- # return 0 00:16:39.054 04:07:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:16:39.054 
04:07:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 2 00:16:39.054 04:07:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:39.054 04:07:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:39.054 04:07:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:39.054 04:07:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:39.054 04:07:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:39.054 04:07:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:39.054 04:07:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:39.054 04:07:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:39.054 04:07:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:39.054 04:07:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:39.054 04:07:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:39.054 04:07:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:39.054 04:07:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:39.054 04:07:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:39.054 04:07:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:39.054 "name": "Existed_Raid", 00:16:39.054 "uuid": "c1d996af-25df-403f-aff9-96d201378194", 00:16:39.054 "strip_size_kb": 64, 00:16:39.054 "state": 
"online", 00:16:39.054 "raid_level": "raid5f", 00:16:39.054 "superblock": false, 00:16:39.054 "num_base_bdevs": 3, 00:16:39.054 "num_base_bdevs_discovered": 2, 00:16:39.054 "num_base_bdevs_operational": 2, 00:16:39.054 "base_bdevs_list": [ 00:16:39.054 { 00:16:39.054 "name": null, 00:16:39.054 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:39.054 "is_configured": false, 00:16:39.054 "data_offset": 0, 00:16:39.054 "data_size": 65536 00:16:39.054 }, 00:16:39.054 { 00:16:39.054 "name": "BaseBdev2", 00:16:39.054 "uuid": "5fb30f20-1894-4e68-85bc-653685c0597c", 00:16:39.054 "is_configured": true, 00:16:39.054 "data_offset": 0, 00:16:39.054 "data_size": 65536 00:16:39.054 }, 00:16:39.054 { 00:16:39.054 "name": "BaseBdev3", 00:16:39.054 "uuid": "4d4809a7-c736-4d91-b8f6-c488f34163a6", 00:16:39.054 "is_configured": true, 00:16:39.054 "data_offset": 0, 00:16:39.054 "data_size": 65536 00:16:39.054 } 00:16:39.054 ] 00:16:39.054 }' 00:16:39.054 04:07:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:39.054 04:07:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:39.315 04:07:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:16:39.315 04:07:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:16:39.315 04:07:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:16:39.315 04:07:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:39.315 04:07:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:39.315 04:07:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:39.315 04:07:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:39.315 04:07:32 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:16:39.315 04:07:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:16:39.315 04:07:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:16:39.315 04:07:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:39.315 04:07:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:39.315 [2024-12-06 04:07:32.620745] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:16:39.315 [2024-12-06 04:07:32.620914] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:39.573 [2024-12-06 04:07:32.738483] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:39.574 04:07:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:39.574 04:07:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:16:39.574 04:07:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:16:39.574 04:07:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:39.574 04:07:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:39.574 04:07:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:39.574 04:07:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:16:39.574 04:07:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:39.574 04:07:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:16:39.574 04:07:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 
00:16:39.574 04:07:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:16:39.574 04:07:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:39.574 04:07:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:39.574 [2024-12-06 04:07:32.782475] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:16:39.574 [2024-12-06 04:07:32.782541] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:16:39.574 04:07:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:39.574 04:07:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:16:39.574 04:07:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:16:39.574 04:07:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:39.574 04:07:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:39.574 04:07:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:39.574 04:07:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:16:39.574 04:07:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:39.834 04:07:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:16:39.834 04:07:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:16:39.834 04:07:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:16:39.834 04:07:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:16:39.834 04:07:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < 
num_base_bdevs )) 00:16:39.834 04:07:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:16:39.834 04:07:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:39.834 04:07:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:39.834 BaseBdev2 00:16:39.834 04:07:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:39.834 04:07:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:16:39.834 04:07:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:16:39.834 04:07:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:16:39.834 04:07:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:16:39.834 04:07:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:16:39.834 04:07:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:16:39.834 04:07:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:16:39.834 04:07:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:39.834 04:07:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:39.834 04:07:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:39.834 04:07:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:16:39.834 04:07:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:39.834 04:07:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 
00:16:39.834 [ 00:16:39.834 { 00:16:39.834 "name": "BaseBdev2", 00:16:39.834 "aliases": [ 00:16:39.834 "968a27b7-1bc3-4870-9d8d-c4578ebb6b9c" 00:16:39.834 ], 00:16:39.834 "product_name": "Malloc disk", 00:16:39.834 "block_size": 512, 00:16:39.834 "num_blocks": 65536, 00:16:39.834 "uuid": "968a27b7-1bc3-4870-9d8d-c4578ebb6b9c", 00:16:39.834 "assigned_rate_limits": { 00:16:39.834 "rw_ios_per_sec": 0, 00:16:39.834 "rw_mbytes_per_sec": 0, 00:16:39.834 "r_mbytes_per_sec": 0, 00:16:39.834 "w_mbytes_per_sec": 0 00:16:39.834 }, 00:16:39.834 "claimed": false, 00:16:39.834 "zoned": false, 00:16:39.834 "supported_io_types": { 00:16:39.834 "read": true, 00:16:39.834 "write": true, 00:16:39.834 "unmap": true, 00:16:39.834 "flush": true, 00:16:39.834 "reset": true, 00:16:39.834 "nvme_admin": false, 00:16:39.834 "nvme_io": false, 00:16:39.834 "nvme_io_md": false, 00:16:39.834 "write_zeroes": true, 00:16:39.834 "zcopy": true, 00:16:39.834 "get_zone_info": false, 00:16:39.834 "zone_management": false, 00:16:39.834 "zone_append": false, 00:16:39.834 "compare": false, 00:16:39.834 "compare_and_write": false, 00:16:39.834 "abort": true, 00:16:39.834 "seek_hole": false, 00:16:39.834 "seek_data": false, 00:16:39.834 "copy": true, 00:16:39.834 "nvme_iov_md": false 00:16:39.834 }, 00:16:39.834 "memory_domains": [ 00:16:39.834 { 00:16:39.834 "dma_device_id": "system", 00:16:39.834 "dma_device_type": 1 00:16:39.834 }, 00:16:39.834 { 00:16:39.834 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:39.834 "dma_device_type": 2 00:16:39.834 } 00:16:39.834 ], 00:16:39.834 "driver_specific": {} 00:16:39.834 } 00:16:39.834 ] 00:16:39.834 04:07:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:39.834 04:07:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:16:39.834 04:07:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:16:39.834 04:07:33 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:16:39.834 04:07:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:16:39.834 04:07:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:39.834 04:07:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:39.834 BaseBdev3 00:16:39.834 04:07:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:39.834 04:07:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:16:39.834 04:07:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:16:39.834 04:07:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:16:39.834 04:07:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:16:39.834 04:07:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:16:39.834 04:07:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:16:39.834 04:07:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:16:39.834 04:07:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:39.834 04:07:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:39.834 04:07:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:39.834 04:07:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:16:39.834 04:07:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:39.834 04:07:33 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:16:39.834 [ 00:16:39.834 { 00:16:39.834 "name": "BaseBdev3", 00:16:39.834 "aliases": [ 00:16:39.834 "b58e280b-5863-4ea4-8ebe-757e3be6da02" 00:16:39.834 ], 00:16:39.834 "product_name": "Malloc disk", 00:16:39.834 "block_size": 512, 00:16:39.834 "num_blocks": 65536, 00:16:39.834 "uuid": "b58e280b-5863-4ea4-8ebe-757e3be6da02", 00:16:39.834 "assigned_rate_limits": { 00:16:39.834 "rw_ios_per_sec": 0, 00:16:39.834 "rw_mbytes_per_sec": 0, 00:16:39.834 "r_mbytes_per_sec": 0, 00:16:39.834 "w_mbytes_per_sec": 0 00:16:39.834 }, 00:16:39.834 "claimed": false, 00:16:39.834 "zoned": false, 00:16:39.834 "supported_io_types": { 00:16:39.834 "read": true, 00:16:39.834 "write": true, 00:16:39.834 "unmap": true, 00:16:39.834 "flush": true, 00:16:39.834 "reset": true, 00:16:39.834 "nvme_admin": false, 00:16:39.834 "nvme_io": false, 00:16:39.834 "nvme_io_md": false, 00:16:39.834 "write_zeroes": true, 00:16:39.834 "zcopy": true, 00:16:39.834 "get_zone_info": false, 00:16:39.834 "zone_management": false, 00:16:39.834 "zone_append": false, 00:16:39.834 "compare": false, 00:16:39.834 "compare_and_write": false, 00:16:39.834 "abort": true, 00:16:39.834 "seek_hole": false, 00:16:39.834 "seek_data": false, 00:16:39.834 "copy": true, 00:16:39.834 "nvme_iov_md": false 00:16:39.834 }, 00:16:39.834 "memory_domains": [ 00:16:39.834 { 00:16:39.834 "dma_device_id": "system", 00:16:39.834 "dma_device_type": 1 00:16:39.834 }, 00:16:39.834 { 00:16:39.834 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:39.834 "dma_device_type": 2 00:16:39.834 } 00:16:39.834 ], 00:16:39.834 "driver_specific": {} 00:16:39.834 } 00:16:39.834 ] 00:16:39.834 04:07:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:39.834 04:07:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:16:39.834 04:07:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:16:39.835 04:07:33 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:16:39.835 04:07:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:16:39.835 04:07:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:39.835 04:07:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:39.835 [2024-12-06 04:07:33.104329] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:16:39.835 [2024-12-06 04:07:33.104451] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:16:39.835 [2024-12-06 04:07:33.104491] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:39.835 [2024-12-06 04:07:33.106707] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:16:39.835 04:07:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:39.835 04:07:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:16:39.835 04:07:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:39.835 04:07:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:39.835 04:07:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:39.835 04:07:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:39.835 04:07:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:39.835 04:07:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:39.835 04:07:33 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:39.835 04:07:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:39.835 04:07:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:39.835 04:07:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:39.835 04:07:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:39.835 04:07:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:39.835 04:07:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:39.835 04:07:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:39.835 04:07:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:39.835 "name": "Existed_Raid", 00:16:39.835 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:39.835 "strip_size_kb": 64, 00:16:39.835 "state": "configuring", 00:16:39.835 "raid_level": "raid5f", 00:16:39.835 "superblock": false, 00:16:39.835 "num_base_bdevs": 3, 00:16:39.835 "num_base_bdevs_discovered": 2, 00:16:39.835 "num_base_bdevs_operational": 3, 00:16:39.835 "base_bdevs_list": [ 00:16:39.835 { 00:16:39.835 "name": "BaseBdev1", 00:16:39.835 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:39.835 "is_configured": false, 00:16:39.835 "data_offset": 0, 00:16:39.835 "data_size": 0 00:16:39.835 }, 00:16:39.835 { 00:16:39.835 "name": "BaseBdev2", 00:16:39.835 "uuid": "968a27b7-1bc3-4870-9d8d-c4578ebb6b9c", 00:16:39.835 "is_configured": true, 00:16:39.835 "data_offset": 0, 00:16:39.835 "data_size": 65536 00:16:39.835 }, 00:16:39.835 { 00:16:39.835 "name": "BaseBdev3", 00:16:39.835 "uuid": "b58e280b-5863-4ea4-8ebe-757e3be6da02", 00:16:39.835 "is_configured": true, 
00:16:39.835 "data_offset": 0, 00:16:39.835 "data_size": 65536 00:16:39.835 } 00:16:39.835 ] 00:16:39.835 }' 00:16:39.835 04:07:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:39.835 04:07:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:40.403 04:07:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:16:40.403 04:07:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:40.403 04:07:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:40.403 [2024-12-06 04:07:33.479704] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:16:40.403 04:07:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:40.403 04:07:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:16:40.403 04:07:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:40.403 04:07:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:40.403 04:07:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:40.403 04:07:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:40.403 04:07:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:40.403 04:07:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:40.403 04:07:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:40.403 04:07:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:40.403 04:07:33 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:40.403 04:07:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:40.403 04:07:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:40.403 04:07:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:40.403 04:07:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:40.403 04:07:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:40.403 04:07:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:40.403 "name": "Existed_Raid", 00:16:40.403 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:40.403 "strip_size_kb": 64, 00:16:40.403 "state": "configuring", 00:16:40.403 "raid_level": "raid5f", 00:16:40.403 "superblock": false, 00:16:40.403 "num_base_bdevs": 3, 00:16:40.403 "num_base_bdevs_discovered": 1, 00:16:40.403 "num_base_bdevs_operational": 3, 00:16:40.403 "base_bdevs_list": [ 00:16:40.403 { 00:16:40.403 "name": "BaseBdev1", 00:16:40.403 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:40.403 "is_configured": false, 00:16:40.403 "data_offset": 0, 00:16:40.403 "data_size": 0 00:16:40.403 }, 00:16:40.403 { 00:16:40.403 "name": null, 00:16:40.403 "uuid": "968a27b7-1bc3-4870-9d8d-c4578ebb6b9c", 00:16:40.403 "is_configured": false, 00:16:40.403 "data_offset": 0, 00:16:40.403 "data_size": 65536 00:16:40.403 }, 00:16:40.403 { 00:16:40.403 "name": "BaseBdev3", 00:16:40.403 "uuid": "b58e280b-5863-4ea4-8ebe-757e3be6da02", 00:16:40.403 "is_configured": true, 00:16:40.403 "data_offset": 0, 00:16:40.403 "data_size": 65536 00:16:40.403 } 00:16:40.403 ] 00:16:40.403 }' 00:16:40.403 04:07:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:40.403 04:07:33 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:40.664 04:07:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:16:40.664 04:07:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:40.664 04:07:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:40.664 04:07:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:40.664 04:07:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:40.664 04:07:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:16:40.664 04:07:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:16:40.664 04:07:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:40.664 04:07:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:40.664 [2024-12-06 04:07:33.939100] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:40.664 BaseBdev1 00:16:40.664 04:07:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:40.664 04:07:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:16:40.664 04:07:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:16:40.664 04:07:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:16:40.664 04:07:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:16:40.664 04:07:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:16:40.664 04:07:33 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:16:40.664 04:07:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:16:40.664 04:07:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:40.664 04:07:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:40.664 04:07:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:40.664 04:07:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:16:40.664 04:07:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:40.664 04:07:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:40.664 [ 00:16:40.664 { 00:16:40.664 "name": "BaseBdev1", 00:16:40.664 "aliases": [ 00:16:40.664 "5551f367-888e-418f-bf75-bb309c56a99d" 00:16:40.664 ], 00:16:40.664 "product_name": "Malloc disk", 00:16:40.664 "block_size": 512, 00:16:40.664 "num_blocks": 65536, 00:16:40.664 "uuid": "5551f367-888e-418f-bf75-bb309c56a99d", 00:16:40.664 "assigned_rate_limits": { 00:16:40.664 "rw_ios_per_sec": 0, 00:16:40.664 "rw_mbytes_per_sec": 0, 00:16:40.664 "r_mbytes_per_sec": 0, 00:16:40.664 "w_mbytes_per_sec": 0 00:16:40.664 }, 00:16:40.664 "claimed": true, 00:16:40.664 "claim_type": "exclusive_write", 00:16:40.664 "zoned": false, 00:16:40.665 "supported_io_types": { 00:16:40.665 "read": true, 00:16:40.665 "write": true, 00:16:40.665 "unmap": true, 00:16:40.665 "flush": true, 00:16:40.665 "reset": true, 00:16:40.665 "nvme_admin": false, 00:16:40.665 "nvme_io": false, 00:16:40.665 "nvme_io_md": false, 00:16:40.665 "write_zeroes": true, 00:16:40.665 "zcopy": true, 00:16:40.665 "get_zone_info": false, 00:16:40.665 "zone_management": false, 00:16:40.665 "zone_append": false, 00:16:40.665 
"compare": false, 00:16:40.665 "compare_and_write": false, 00:16:40.665 "abort": true, 00:16:40.665 "seek_hole": false, 00:16:40.665 "seek_data": false, 00:16:40.665 "copy": true, 00:16:40.665 "nvme_iov_md": false 00:16:40.665 }, 00:16:40.665 "memory_domains": [ 00:16:40.665 { 00:16:40.665 "dma_device_id": "system", 00:16:40.665 "dma_device_type": 1 00:16:40.665 }, 00:16:40.665 { 00:16:40.665 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:40.665 "dma_device_type": 2 00:16:40.665 } 00:16:40.665 ], 00:16:40.665 "driver_specific": {} 00:16:40.665 } 00:16:40.665 ] 00:16:40.665 04:07:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:40.665 04:07:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:16:40.665 04:07:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:16:40.665 04:07:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:40.665 04:07:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:40.665 04:07:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:40.665 04:07:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:40.665 04:07:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:40.665 04:07:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:40.665 04:07:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:40.665 04:07:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:40.665 04:07:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:40.665 04:07:33 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:40.665 04:07:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:40.665 04:07:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:40.665 04:07:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:40.665 04:07:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:40.665 04:07:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:40.665 "name": "Existed_Raid", 00:16:40.665 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:40.665 "strip_size_kb": 64, 00:16:40.665 "state": "configuring", 00:16:40.665 "raid_level": "raid5f", 00:16:40.665 "superblock": false, 00:16:40.665 "num_base_bdevs": 3, 00:16:40.665 "num_base_bdevs_discovered": 2, 00:16:40.665 "num_base_bdevs_operational": 3, 00:16:40.665 "base_bdevs_list": [ 00:16:40.665 { 00:16:40.665 "name": "BaseBdev1", 00:16:40.665 "uuid": "5551f367-888e-418f-bf75-bb309c56a99d", 00:16:40.665 "is_configured": true, 00:16:40.665 "data_offset": 0, 00:16:40.665 "data_size": 65536 00:16:40.665 }, 00:16:40.665 { 00:16:40.665 "name": null, 00:16:40.665 "uuid": "968a27b7-1bc3-4870-9d8d-c4578ebb6b9c", 00:16:40.665 "is_configured": false, 00:16:40.665 "data_offset": 0, 00:16:40.665 "data_size": 65536 00:16:40.665 }, 00:16:40.665 { 00:16:40.665 "name": "BaseBdev3", 00:16:40.665 "uuid": "b58e280b-5863-4ea4-8ebe-757e3be6da02", 00:16:40.665 "is_configured": true, 00:16:40.665 "data_offset": 0, 00:16:40.665 "data_size": 65536 00:16:40.665 } 00:16:40.665 ] 00:16:40.665 }' 00:16:40.665 04:07:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:40.665 04:07:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:41.234 04:07:34 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:41.234 04:07:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:41.234 04:07:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:16:41.234 04:07:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:41.234 04:07:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:41.234 04:07:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:16:41.234 04:07:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:16:41.234 04:07:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:41.234 04:07:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:41.234 [2024-12-06 04:07:34.398395] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:16:41.234 04:07:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:41.234 04:07:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:16:41.234 04:07:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:41.234 04:07:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:41.234 04:07:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:41.234 04:07:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:41.234 04:07:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:41.234 04:07:34 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:41.234 04:07:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:41.234 04:07:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:41.234 04:07:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:41.234 04:07:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:41.234 04:07:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:41.234 04:07:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:41.234 04:07:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:41.234 04:07:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:41.234 04:07:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:41.234 "name": "Existed_Raid", 00:16:41.234 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:41.234 "strip_size_kb": 64, 00:16:41.234 "state": "configuring", 00:16:41.234 "raid_level": "raid5f", 00:16:41.234 "superblock": false, 00:16:41.234 "num_base_bdevs": 3, 00:16:41.234 "num_base_bdevs_discovered": 1, 00:16:41.234 "num_base_bdevs_operational": 3, 00:16:41.234 "base_bdevs_list": [ 00:16:41.234 { 00:16:41.234 "name": "BaseBdev1", 00:16:41.234 "uuid": "5551f367-888e-418f-bf75-bb309c56a99d", 00:16:41.234 "is_configured": true, 00:16:41.234 "data_offset": 0, 00:16:41.234 "data_size": 65536 00:16:41.234 }, 00:16:41.234 { 00:16:41.234 "name": null, 00:16:41.234 "uuid": "968a27b7-1bc3-4870-9d8d-c4578ebb6b9c", 00:16:41.234 "is_configured": false, 00:16:41.234 "data_offset": 0, 00:16:41.234 "data_size": 65536 00:16:41.234 }, 00:16:41.234 { 00:16:41.234 "name": null, 
00:16:41.234 "uuid": "b58e280b-5863-4ea4-8ebe-757e3be6da02", 00:16:41.234 "is_configured": false, 00:16:41.234 "data_offset": 0, 00:16:41.234 "data_size": 65536 00:16:41.234 } 00:16:41.234 ] 00:16:41.234 }' 00:16:41.234 04:07:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:41.234 04:07:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:41.493 04:07:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:16:41.493 04:07:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:41.493 04:07:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:41.493 04:07:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:41.753 04:07:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:41.753 04:07:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:16:41.753 04:07:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:16:41.753 04:07:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:41.753 04:07:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:41.753 [2024-12-06 04:07:34.877674] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:16:41.753 04:07:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:41.753 04:07:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:16:41.753 04:07:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:41.753 04:07:34 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:41.753 04:07:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:41.753 04:07:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:41.753 04:07:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:41.753 04:07:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:41.753 04:07:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:41.753 04:07:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:41.753 04:07:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:41.753 04:07:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:41.753 04:07:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:41.753 04:07:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:41.753 04:07:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:41.753 04:07:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:41.753 04:07:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:41.753 "name": "Existed_Raid", 00:16:41.753 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:41.753 "strip_size_kb": 64, 00:16:41.753 "state": "configuring", 00:16:41.753 "raid_level": "raid5f", 00:16:41.753 "superblock": false, 00:16:41.753 "num_base_bdevs": 3, 00:16:41.753 "num_base_bdevs_discovered": 2, 00:16:41.753 "num_base_bdevs_operational": 3, 00:16:41.753 "base_bdevs_list": [ 00:16:41.753 { 
00:16:41.753 "name": "BaseBdev1", 00:16:41.753 "uuid": "5551f367-888e-418f-bf75-bb309c56a99d", 00:16:41.753 "is_configured": true, 00:16:41.753 "data_offset": 0, 00:16:41.753 "data_size": 65536 00:16:41.753 }, 00:16:41.753 { 00:16:41.753 "name": null, 00:16:41.753 "uuid": "968a27b7-1bc3-4870-9d8d-c4578ebb6b9c", 00:16:41.753 "is_configured": false, 00:16:41.753 "data_offset": 0, 00:16:41.753 "data_size": 65536 00:16:41.753 }, 00:16:41.753 { 00:16:41.753 "name": "BaseBdev3", 00:16:41.753 "uuid": "b58e280b-5863-4ea4-8ebe-757e3be6da02", 00:16:41.753 "is_configured": true, 00:16:41.753 "data_offset": 0, 00:16:41.753 "data_size": 65536 00:16:41.753 } 00:16:41.753 ] 00:16:41.753 }' 00:16:41.753 04:07:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:41.753 04:07:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:42.057 04:07:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:42.057 04:07:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:42.057 04:07:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:42.057 04:07:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:16:42.057 04:07:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:42.057 04:07:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:16:42.057 04:07:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:16:42.057 04:07:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:42.057 04:07:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:42.057 [2024-12-06 04:07:35.356914] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:16:42.338 04:07:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:42.338 04:07:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:16:42.338 04:07:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:42.338 04:07:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:42.338 04:07:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:42.338 04:07:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:42.338 04:07:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:42.338 04:07:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:42.338 04:07:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:42.338 04:07:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:42.338 04:07:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:42.338 04:07:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:42.338 04:07:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:42.338 04:07:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:42.338 04:07:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:42.338 04:07:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:42.338 04:07:35 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:42.338 "name": "Existed_Raid", 00:16:42.338 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:42.338 "strip_size_kb": 64, 00:16:42.338 "state": "configuring", 00:16:42.338 "raid_level": "raid5f", 00:16:42.338 "superblock": false, 00:16:42.338 "num_base_bdevs": 3, 00:16:42.338 "num_base_bdevs_discovered": 1, 00:16:42.338 "num_base_bdevs_operational": 3, 00:16:42.338 "base_bdevs_list": [ 00:16:42.338 { 00:16:42.338 "name": null, 00:16:42.338 "uuid": "5551f367-888e-418f-bf75-bb309c56a99d", 00:16:42.338 "is_configured": false, 00:16:42.338 "data_offset": 0, 00:16:42.338 "data_size": 65536 00:16:42.338 }, 00:16:42.338 { 00:16:42.338 "name": null, 00:16:42.338 "uuid": "968a27b7-1bc3-4870-9d8d-c4578ebb6b9c", 00:16:42.338 "is_configured": false, 00:16:42.338 "data_offset": 0, 00:16:42.338 "data_size": 65536 00:16:42.338 }, 00:16:42.338 { 00:16:42.338 "name": "BaseBdev3", 00:16:42.338 "uuid": "b58e280b-5863-4ea4-8ebe-757e3be6da02", 00:16:42.338 "is_configured": true, 00:16:42.338 "data_offset": 0, 00:16:42.338 "data_size": 65536 00:16:42.338 } 00:16:42.338 ] 00:16:42.338 }' 00:16:42.338 04:07:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:42.338 04:07:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:42.596 04:07:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:16:42.597 04:07:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:42.597 04:07:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:42.597 04:07:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:42.597 04:07:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:42.597 04:07:35 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:16:42.597 04:07:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:16:42.597 04:07:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:42.597 04:07:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:42.597 [2024-12-06 04:07:35.943902] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:42.855 04:07:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:42.855 04:07:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:16:42.855 04:07:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:42.855 04:07:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:42.855 04:07:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:42.855 04:07:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:42.855 04:07:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:42.855 04:07:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:42.855 04:07:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:42.855 04:07:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:42.855 04:07:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:42.855 04:07:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:42.855 04:07:35 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:42.855 04:07:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:42.855 04:07:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:42.855 04:07:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:42.855 04:07:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:42.855 "name": "Existed_Raid", 00:16:42.855 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:42.855 "strip_size_kb": 64, 00:16:42.855 "state": "configuring", 00:16:42.855 "raid_level": "raid5f", 00:16:42.855 "superblock": false, 00:16:42.855 "num_base_bdevs": 3, 00:16:42.855 "num_base_bdevs_discovered": 2, 00:16:42.855 "num_base_bdevs_operational": 3, 00:16:42.855 "base_bdevs_list": [ 00:16:42.855 { 00:16:42.855 "name": null, 00:16:42.855 "uuid": "5551f367-888e-418f-bf75-bb309c56a99d", 00:16:42.855 "is_configured": false, 00:16:42.855 "data_offset": 0, 00:16:42.855 "data_size": 65536 00:16:42.855 }, 00:16:42.855 { 00:16:42.855 "name": "BaseBdev2", 00:16:42.855 "uuid": "968a27b7-1bc3-4870-9d8d-c4578ebb6b9c", 00:16:42.855 "is_configured": true, 00:16:42.855 "data_offset": 0, 00:16:42.855 "data_size": 65536 00:16:42.855 }, 00:16:42.855 { 00:16:42.855 "name": "BaseBdev3", 00:16:42.855 "uuid": "b58e280b-5863-4ea4-8ebe-757e3be6da02", 00:16:42.855 "is_configured": true, 00:16:42.855 "data_offset": 0, 00:16:42.855 "data_size": 65536 00:16:42.855 } 00:16:42.855 ] 00:16:42.855 }' 00:16:42.855 04:07:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:42.856 04:07:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:43.115 04:07:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:16:43.115 04:07:36 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:43.115 04:07:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:43.115 04:07:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:43.115 04:07:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:43.115 04:07:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:16:43.115 04:07:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:43.115 04:07:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:16:43.115 04:07:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:43.115 04:07:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:43.115 04:07:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:43.115 04:07:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 5551f367-888e-418f-bf75-bb309c56a99d 00:16:43.115 04:07:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:43.115 04:07:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:43.374 [2024-12-06 04:07:36.495546] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:16:43.375 [2024-12-06 04:07:36.495610] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:16:43.375 [2024-12-06 04:07:36.495623] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:16:43.375 [2024-12-06 04:07:36.495934] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 
0x60d000006220 00:16:43.375 [2024-12-06 04:07:36.501987] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:16:43.375 [2024-12-06 04:07:36.502015] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:16:43.375 [2024-12-06 04:07:36.502326] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:43.375 NewBaseBdev 00:16:43.375 04:07:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:43.375 04:07:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:16:43.375 04:07:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:16:43.375 04:07:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:16:43.375 04:07:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:16:43.375 04:07:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:16:43.375 04:07:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:16:43.375 04:07:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:16:43.375 04:07:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:43.375 04:07:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:43.375 04:07:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:43.375 04:07:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:16:43.375 04:07:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:43.375 04:07:36 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:43.375 [ 00:16:43.375 { 00:16:43.375 "name": "NewBaseBdev", 00:16:43.375 "aliases": [ 00:16:43.375 "5551f367-888e-418f-bf75-bb309c56a99d" 00:16:43.375 ], 00:16:43.375 "product_name": "Malloc disk", 00:16:43.375 "block_size": 512, 00:16:43.375 "num_blocks": 65536, 00:16:43.375 "uuid": "5551f367-888e-418f-bf75-bb309c56a99d", 00:16:43.375 "assigned_rate_limits": { 00:16:43.375 "rw_ios_per_sec": 0, 00:16:43.375 "rw_mbytes_per_sec": 0, 00:16:43.375 "r_mbytes_per_sec": 0, 00:16:43.375 "w_mbytes_per_sec": 0 00:16:43.375 }, 00:16:43.375 "claimed": true, 00:16:43.375 "claim_type": "exclusive_write", 00:16:43.375 "zoned": false, 00:16:43.375 "supported_io_types": { 00:16:43.375 "read": true, 00:16:43.375 "write": true, 00:16:43.375 "unmap": true, 00:16:43.375 "flush": true, 00:16:43.375 "reset": true, 00:16:43.375 "nvme_admin": false, 00:16:43.375 "nvme_io": false, 00:16:43.375 "nvme_io_md": false, 00:16:43.375 "write_zeroes": true, 00:16:43.375 "zcopy": true, 00:16:43.375 "get_zone_info": false, 00:16:43.375 "zone_management": false, 00:16:43.375 "zone_append": false, 00:16:43.375 "compare": false, 00:16:43.375 "compare_and_write": false, 00:16:43.375 "abort": true, 00:16:43.375 "seek_hole": false, 00:16:43.375 "seek_data": false, 00:16:43.375 "copy": true, 00:16:43.375 "nvme_iov_md": false 00:16:43.375 }, 00:16:43.375 "memory_domains": [ 00:16:43.375 { 00:16:43.375 "dma_device_id": "system", 00:16:43.375 "dma_device_type": 1 00:16:43.375 }, 00:16:43.375 { 00:16:43.375 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:43.375 "dma_device_type": 2 00:16:43.375 } 00:16:43.375 ], 00:16:43.375 "driver_specific": {} 00:16:43.375 } 00:16:43.375 ] 00:16:43.375 04:07:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:43.375 04:07:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:16:43.375 04:07:36 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:16:43.375 04:07:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:43.375 04:07:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:43.375 04:07:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:43.375 04:07:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:43.375 04:07:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:43.375 04:07:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:43.375 04:07:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:43.375 04:07:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:43.375 04:07:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:43.375 04:07:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:43.375 04:07:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:43.375 04:07:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:43.375 04:07:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:43.375 04:07:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:43.375 04:07:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:43.375 "name": "Existed_Raid", 00:16:43.375 "uuid": "5c52e353-45c0-47c3-a8dd-eded794aaa64", 00:16:43.375 "strip_size_kb": 64, 00:16:43.375 "state": "online", 
00:16:43.375 "raid_level": "raid5f", 00:16:43.375 "superblock": false, 00:16:43.375 "num_base_bdevs": 3, 00:16:43.375 "num_base_bdevs_discovered": 3, 00:16:43.375 "num_base_bdevs_operational": 3, 00:16:43.375 "base_bdevs_list": [ 00:16:43.375 { 00:16:43.375 "name": "NewBaseBdev", 00:16:43.375 "uuid": "5551f367-888e-418f-bf75-bb309c56a99d", 00:16:43.375 "is_configured": true, 00:16:43.375 "data_offset": 0, 00:16:43.375 "data_size": 65536 00:16:43.375 }, 00:16:43.375 { 00:16:43.375 "name": "BaseBdev2", 00:16:43.375 "uuid": "968a27b7-1bc3-4870-9d8d-c4578ebb6b9c", 00:16:43.375 "is_configured": true, 00:16:43.375 "data_offset": 0, 00:16:43.375 "data_size": 65536 00:16:43.375 }, 00:16:43.375 { 00:16:43.375 "name": "BaseBdev3", 00:16:43.375 "uuid": "b58e280b-5863-4ea4-8ebe-757e3be6da02", 00:16:43.375 "is_configured": true, 00:16:43.375 "data_offset": 0, 00:16:43.375 "data_size": 65536 00:16:43.375 } 00:16:43.375 ] 00:16:43.375 }' 00:16:43.375 04:07:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:43.375 04:07:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:43.634 04:07:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:16:43.634 04:07:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:16:43.634 04:07:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:16:43.634 04:07:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:16:43.634 04:07:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:16:43.634 04:07:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:16:43.634 04:07:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:16:43.634 04:07:36 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:43.634 04:07:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:16:43.634 04:07:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:43.634 [2024-12-06 04:07:36.973141] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:43.634 04:07:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:43.893 04:07:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:16:43.893 "name": "Existed_Raid", 00:16:43.893 "aliases": [ 00:16:43.893 "5c52e353-45c0-47c3-a8dd-eded794aaa64" 00:16:43.893 ], 00:16:43.893 "product_name": "Raid Volume", 00:16:43.893 "block_size": 512, 00:16:43.893 "num_blocks": 131072, 00:16:43.893 "uuid": "5c52e353-45c0-47c3-a8dd-eded794aaa64", 00:16:43.893 "assigned_rate_limits": { 00:16:43.893 "rw_ios_per_sec": 0, 00:16:43.893 "rw_mbytes_per_sec": 0, 00:16:43.893 "r_mbytes_per_sec": 0, 00:16:43.893 "w_mbytes_per_sec": 0 00:16:43.893 }, 00:16:43.893 "claimed": false, 00:16:43.893 "zoned": false, 00:16:43.893 "supported_io_types": { 00:16:43.893 "read": true, 00:16:43.893 "write": true, 00:16:43.893 "unmap": false, 00:16:43.893 "flush": false, 00:16:43.893 "reset": true, 00:16:43.893 "nvme_admin": false, 00:16:43.893 "nvme_io": false, 00:16:43.893 "nvme_io_md": false, 00:16:43.893 "write_zeroes": true, 00:16:43.893 "zcopy": false, 00:16:43.893 "get_zone_info": false, 00:16:43.893 "zone_management": false, 00:16:43.893 "zone_append": false, 00:16:43.893 "compare": false, 00:16:43.893 "compare_and_write": false, 00:16:43.893 "abort": false, 00:16:43.893 "seek_hole": false, 00:16:43.893 "seek_data": false, 00:16:43.893 "copy": false, 00:16:43.893 "nvme_iov_md": false 00:16:43.893 }, 00:16:43.893 "driver_specific": { 00:16:43.893 "raid": { 00:16:43.893 "uuid": 
"5c52e353-45c0-47c3-a8dd-eded794aaa64", 00:16:43.893 "strip_size_kb": 64, 00:16:43.893 "state": "online", 00:16:43.893 "raid_level": "raid5f", 00:16:43.893 "superblock": false, 00:16:43.893 "num_base_bdevs": 3, 00:16:43.893 "num_base_bdevs_discovered": 3, 00:16:43.893 "num_base_bdevs_operational": 3, 00:16:43.893 "base_bdevs_list": [ 00:16:43.893 { 00:16:43.893 "name": "NewBaseBdev", 00:16:43.893 "uuid": "5551f367-888e-418f-bf75-bb309c56a99d", 00:16:43.893 "is_configured": true, 00:16:43.893 "data_offset": 0, 00:16:43.893 "data_size": 65536 00:16:43.893 }, 00:16:43.893 { 00:16:43.893 "name": "BaseBdev2", 00:16:43.893 "uuid": "968a27b7-1bc3-4870-9d8d-c4578ebb6b9c", 00:16:43.893 "is_configured": true, 00:16:43.893 "data_offset": 0, 00:16:43.893 "data_size": 65536 00:16:43.893 }, 00:16:43.893 { 00:16:43.893 "name": "BaseBdev3", 00:16:43.893 "uuid": "b58e280b-5863-4ea4-8ebe-757e3be6da02", 00:16:43.893 "is_configured": true, 00:16:43.893 "data_offset": 0, 00:16:43.893 "data_size": 65536 00:16:43.893 } 00:16:43.893 ] 00:16:43.893 } 00:16:43.893 } 00:16:43.893 }' 00:16:43.893 04:07:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:16:43.893 04:07:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:16:43.893 BaseBdev2 00:16:43.893 BaseBdev3' 00:16:43.893 04:07:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:43.893 04:07:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:16:43.893 04:07:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:43.893 04:07:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:16:43.893 04:07:37 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:16:43.893 04:07:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:43.893 04:07:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:43.893 04:07:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:43.893 04:07:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:43.893 04:07:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:43.893 04:07:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:43.893 04:07:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:16:43.893 04:07:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:43.893 04:07:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:43.893 04:07:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:43.893 04:07:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:43.893 04:07:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:43.893 04:07:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:43.893 04:07:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:43.893 04:07:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:16:43.893 04:07:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:43.893 04:07:37 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:43.893 04:07:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:43.893 04:07:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:43.893 04:07:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:43.893 04:07:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:43.893 04:07:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:16:43.893 04:07:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:43.893 04:07:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:43.893 [2024-12-06 04:07:37.212660] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:16:43.893 [2024-12-06 04:07:37.212703] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:43.893 [2024-12-06 04:07:37.212806] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:43.893 [2024-12-06 04:07:37.213176] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:43.893 [2024-12-06 04:07:37.213194] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:16:43.893 04:07:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:43.893 04:07:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 80165 00:16:43.893 04:07:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@954 -- # '[' -z 80165 ']' 00:16:43.893 04:07:37 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@958 -- # kill -0 80165 00:16:43.893 04:07:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@959 -- # uname 00:16:43.893 04:07:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:43.893 04:07:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 80165 00:16:44.151 killing process with pid 80165 00:16:44.151 04:07:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:44.151 04:07:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:44.151 04:07:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 80165' 00:16:44.151 04:07:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@973 -- # kill 80165 00:16:44.151 [2024-12-06 04:07:37.248678] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:16:44.151 04:07:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@978 -- # wait 80165 00:16:44.409 [2024-12-06 04:07:37.608307] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:16:45.788 ************************************ 00:16:45.788 END TEST raid5f_state_function_test 00:16:45.788 ************************************ 00:16:45.788 04:07:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:16:45.788 00:16:45.788 real 0m10.316s 00:16:45.788 user 0m16.111s 00:16:45.788 sys 0m1.596s 00:16:45.788 04:07:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:45.788 04:07:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:45.788 04:07:38 bdev_raid -- bdev/bdev_raid.sh@987 -- # run_test raid5f_state_function_test_sb raid_state_function_test raid5f 3 true 00:16:45.788 04:07:38 bdev_raid -- 
common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:16:45.788 04:07:38 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:45.788 04:07:38 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:16:45.788 ************************************ 00:16:45.788 START TEST raid5f_state_function_test_sb 00:16:45.788 ************************************ 00:16:45.788 04:07:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test raid5f 3 true 00:16:45.788 04:07:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid5f 00:16:45.788 04:07:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:16:45.788 04:07:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:16:45.788 04:07:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:16:45.788 04:07:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:16:45.788 04:07:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:16:45.788 04:07:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:16:45.788 04:07:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:16:45.788 04:07:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:16:45.788 04:07:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:16:45.788 04:07:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:16:45.788 04:07:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:16:45.788 04:07:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:16:45.788 04:07:38 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:16:45.788 04:07:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:16:45.788 04:07:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:16:45.788 04:07:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:16:45.788 04:07:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:16:45.788 04:07:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:16:45.788 04:07:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:16:45.788 04:07:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:16:45.788 04:07:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid5f '!=' raid1 ']' 00:16:45.788 04:07:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:16:45.788 04:07:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:16:45.788 04:07:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:16:45.788 04:07:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:16:45.788 04:07:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=80781 00:16:45.788 Process raid pid: 80781 00:16:45.788 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:16:45.788 04:07:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 80781' 00:16:45.788 04:07:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 80781 00:16:45.788 04:07:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:16:45.788 04:07:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 80781 ']' 00:16:45.788 04:07:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:45.788 04:07:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:45.788 04:07:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:45.788 04:07:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:45.788 04:07:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:45.788 [2024-12-06 04:07:39.086487] Starting SPDK v25.01-pre git sha1 a4d2a837b / DPDK 24.03.0 initialization... 
00:16:45.788 [2024-12-06 04:07:39.086631] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:46.048 [2024-12-06 04:07:39.251641] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:46.048 [2024-12-06 04:07:39.386354] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:46.306 [2024-12-06 04:07:39.636347] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:46.306 [2024-12-06 04:07:39.636399] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:46.875 04:07:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:46.875 04:07:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:16:46.875 04:07:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:16:46.875 04:07:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:46.875 04:07:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:46.875 [2024-12-06 04:07:40.000748] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:16:46.875 [2024-12-06 04:07:40.000816] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:16:46.875 [2024-12-06 04:07:40.000829] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:46.875 [2024-12-06 04:07:40.000841] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:46.875 [2024-12-06 04:07:40.000854] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to 
find bdev with name: BaseBdev3 00:16:46.875 [2024-12-06 04:07:40.000865] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:16:46.875 04:07:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:46.875 04:07:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:16:46.875 04:07:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:46.875 04:07:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:46.875 04:07:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:46.875 04:07:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:46.875 04:07:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:46.875 04:07:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:46.875 04:07:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:46.875 04:07:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:46.875 04:07:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:46.875 04:07:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:46.875 04:07:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:46.875 04:07:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:46.875 04:07:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:46.875 04:07:40 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:46.875 04:07:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:46.875 "name": "Existed_Raid", 00:16:46.875 "uuid": "db4d2ee1-be7d-477f-89d6-1d05b904f02d", 00:16:46.875 "strip_size_kb": 64, 00:16:46.875 "state": "configuring", 00:16:46.875 "raid_level": "raid5f", 00:16:46.875 "superblock": true, 00:16:46.875 "num_base_bdevs": 3, 00:16:46.875 "num_base_bdevs_discovered": 0, 00:16:46.875 "num_base_bdevs_operational": 3, 00:16:46.875 "base_bdevs_list": [ 00:16:46.875 { 00:16:46.875 "name": "BaseBdev1", 00:16:46.875 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:46.875 "is_configured": false, 00:16:46.875 "data_offset": 0, 00:16:46.875 "data_size": 0 00:16:46.875 }, 00:16:46.875 { 00:16:46.875 "name": "BaseBdev2", 00:16:46.875 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:46.875 "is_configured": false, 00:16:46.875 "data_offset": 0, 00:16:46.875 "data_size": 0 00:16:46.875 }, 00:16:46.875 { 00:16:46.875 "name": "BaseBdev3", 00:16:46.875 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:46.875 "is_configured": false, 00:16:46.875 "data_offset": 0, 00:16:46.875 "data_size": 0 00:16:46.875 } 00:16:46.875 ] 00:16:46.875 }' 00:16:46.875 04:07:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:46.875 04:07:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:47.134 04:07:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:16:47.134 04:07:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:47.134 04:07:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:47.134 [2024-12-06 04:07:40.455918] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:16:47.134 
[2024-12-06 04:07:40.456062] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:16:47.134 04:07:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:47.134 04:07:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:16:47.134 04:07:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:47.134 04:07:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:47.134 [2024-12-06 04:07:40.467926] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:16:47.134 [2024-12-06 04:07:40.467991] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:16:47.134 [2024-12-06 04:07:40.468001] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:47.134 [2024-12-06 04:07:40.468010] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:47.134 [2024-12-06 04:07:40.468016] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:16:47.134 [2024-12-06 04:07:40.468025] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:16:47.134 04:07:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:47.134 04:07:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:16:47.134 04:07:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:47.134 04:07:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:47.393 [2024-12-06 04:07:40.520129] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:47.393 BaseBdev1 00:16:47.393 04:07:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:47.393 04:07:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:16:47.393 04:07:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:16:47.393 04:07:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:16:47.393 04:07:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:16:47.393 04:07:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:16:47.393 04:07:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:16:47.393 04:07:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:16:47.393 04:07:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:47.393 04:07:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:47.393 04:07:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:47.393 04:07:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:16:47.393 04:07:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:47.393 04:07:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:47.393 [ 00:16:47.393 { 00:16:47.393 "name": "BaseBdev1", 00:16:47.393 "aliases": [ 00:16:47.393 "50a99201-2cdc-4407-9b6d-0aedef88106d" 00:16:47.393 ], 00:16:47.393 "product_name": "Malloc disk", 00:16:47.393 "block_size": 512, 00:16:47.393 
"num_blocks": 65536, 00:16:47.393 "uuid": "50a99201-2cdc-4407-9b6d-0aedef88106d", 00:16:47.393 "assigned_rate_limits": { 00:16:47.393 "rw_ios_per_sec": 0, 00:16:47.393 "rw_mbytes_per_sec": 0, 00:16:47.393 "r_mbytes_per_sec": 0, 00:16:47.393 "w_mbytes_per_sec": 0 00:16:47.393 }, 00:16:47.393 "claimed": true, 00:16:47.393 "claim_type": "exclusive_write", 00:16:47.393 "zoned": false, 00:16:47.393 "supported_io_types": { 00:16:47.393 "read": true, 00:16:47.393 "write": true, 00:16:47.393 "unmap": true, 00:16:47.393 "flush": true, 00:16:47.393 "reset": true, 00:16:47.393 "nvme_admin": false, 00:16:47.393 "nvme_io": false, 00:16:47.393 "nvme_io_md": false, 00:16:47.393 "write_zeroes": true, 00:16:47.393 "zcopy": true, 00:16:47.393 "get_zone_info": false, 00:16:47.393 "zone_management": false, 00:16:47.393 "zone_append": false, 00:16:47.393 "compare": false, 00:16:47.393 "compare_and_write": false, 00:16:47.393 "abort": true, 00:16:47.393 "seek_hole": false, 00:16:47.393 "seek_data": false, 00:16:47.393 "copy": true, 00:16:47.393 "nvme_iov_md": false 00:16:47.393 }, 00:16:47.393 "memory_domains": [ 00:16:47.393 { 00:16:47.393 "dma_device_id": "system", 00:16:47.393 "dma_device_type": 1 00:16:47.393 }, 00:16:47.393 { 00:16:47.393 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:47.393 "dma_device_type": 2 00:16:47.393 } 00:16:47.393 ], 00:16:47.393 "driver_specific": {} 00:16:47.393 } 00:16:47.393 ] 00:16:47.393 04:07:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:47.393 04:07:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:16:47.393 04:07:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:16:47.393 04:07:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:47.393 04:07:40 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:47.393 04:07:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:47.393 04:07:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:47.393 04:07:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:47.393 04:07:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:47.393 04:07:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:47.393 04:07:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:47.393 04:07:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:47.393 04:07:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:47.393 04:07:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:47.393 04:07:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:47.393 04:07:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:47.393 04:07:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:47.393 04:07:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:47.393 "name": "Existed_Raid", 00:16:47.393 "uuid": "6e335ccd-83c6-4367-8677-fec694ca7d45", 00:16:47.393 "strip_size_kb": 64, 00:16:47.393 "state": "configuring", 00:16:47.393 "raid_level": "raid5f", 00:16:47.393 "superblock": true, 00:16:47.393 "num_base_bdevs": 3, 00:16:47.393 "num_base_bdevs_discovered": 1, 00:16:47.393 "num_base_bdevs_operational": 3, 00:16:47.393 "base_bdevs_list": [ 00:16:47.393 { 00:16:47.393 
"name": "BaseBdev1", 00:16:47.393 "uuid": "50a99201-2cdc-4407-9b6d-0aedef88106d", 00:16:47.393 "is_configured": true, 00:16:47.393 "data_offset": 2048, 00:16:47.393 "data_size": 63488 00:16:47.393 }, 00:16:47.393 { 00:16:47.393 "name": "BaseBdev2", 00:16:47.393 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:47.393 "is_configured": false, 00:16:47.393 "data_offset": 0, 00:16:47.393 "data_size": 0 00:16:47.393 }, 00:16:47.393 { 00:16:47.393 "name": "BaseBdev3", 00:16:47.393 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:47.393 "is_configured": false, 00:16:47.393 "data_offset": 0, 00:16:47.393 "data_size": 0 00:16:47.393 } 00:16:47.393 ] 00:16:47.393 }' 00:16:47.393 04:07:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:47.393 04:07:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:48.014 04:07:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:16:48.014 04:07:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:48.014 04:07:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:48.014 [2024-12-06 04:07:41.019373] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:16:48.014 [2024-12-06 04:07:41.019537] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:16:48.014 04:07:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:48.014 04:07:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:16:48.014 04:07:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:48.014 04:07:41 bdev_raid.raid5f_state_function_test_sb 
-- common/autotest_common.sh@10 -- # set +x 00:16:48.014 [2024-12-06 04:07:41.031422] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:48.014 [2024-12-06 04:07:41.033685] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:48.014 [2024-12-06 04:07:41.033808] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:48.014 [2024-12-06 04:07:41.033865] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:16:48.014 [2024-12-06 04:07:41.033902] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:16:48.014 04:07:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:48.014 04:07:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:16:48.014 04:07:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:16:48.014 04:07:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:16:48.014 04:07:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:48.014 04:07:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:48.014 04:07:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:48.014 04:07:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:48.014 04:07:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:48.014 04:07:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:48.014 04:07:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local 
num_base_bdevs 00:16:48.014 04:07:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:48.014 04:07:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:48.014 04:07:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:48.014 04:07:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:48.014 04:07:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:48.014 04:07:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:48.014 04:07:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:48.014 04:07:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:48.014 "name": "Existed_Raid", 00:16:48.014 "uuid": "a6904a8c-53ef-41fc-a637-518891ad3c35", 00:16:48.014 "strip_size_kb": 64, 00:16:48.014 "state": "configuring", 00:16:48.014 "raid_level": "raid5f", 00:16:48.014 "superblock": true, 00:16:48.014 "num_base_bdevs": 3, 00:16:48.014 "num_base_bdevs_discovered": 1, 00:16:48.014 "num_base_bdevs_operational": 3, 00:16:48.014 "base_bdevs_list": [ 00:16:48.014 { 00:16:48.014 "name": "BaseBdev1", 00:16:48.014 "uuid": "50a99201-2cdc-4407-9b6d-0aedef88106d", 00:16:48.014 "is_configured": true, 00:16:48.014 "data_offset": 2048, 00:16:48.014 "data_size": 63488 00:16:48.014 }, 00:16:48.014 { 00:16:48.014 "name": "BaseBdev2", 00:16:48.014 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:48.014 "is_configured": false, 00:16:48.014 "data_offset": 0, 00:16:48.014 "data_size": 0 00:16:48.014 }, 00:16:48.014 { 00:16:48.014 "name": "BaseBdev3", 00:16:48.014 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:48.014 "is_configured": false, 00:16:48.014 "data_offset": 0, 00:16:48.014 "data_size": 
0 00:16:48.014 } 00:16:48.014 ] 00:16:48.014 }' 00:16:48.014 04:07:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:48.014 04:07:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:48.273 04:07:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:16:48.273 04:07:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:48.273 04:07:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:48.273 [2024-12-06 04:07:41.531711] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:48.273 BaseBdev2 00:16:48.273 04:07:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:48.273 04:07:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:16:48.273 04:07:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:16:48.273 04:07:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:16:48.273 04:07:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:16:48.273 04:07:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:16:48.273 04:07:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:16:48.273 04:07:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:16:48.273 04:07:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:48.273 04:07:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:48.273 04:07:41 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:48.273 04:07:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:16:48.273 04:07:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:48.273 04:07:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:48.273 [ 00:16:48.273 { 00:16:48.273 "name": "BaseBdev2", 00:16:48.273 "aliases": [ 00:16:48.273 "4fd925af-e3d4-4725-a50d-e8bc0d613614" 00:16:48.273 ], 00:16:48.273 "product_name": "Malloc disk", 00:16:48.273 "block_size": 512, 00:16:48.273 "num_blocks": 65536, 00:16:48.273 "uuid": "4fd925af-e3d4-4725-a50d-e8bc0d613614", 00:16:48.273 "assigned_rate_limits": { 00:16:48.273 "rw_ios_per_sec": 0, 00:16:48.273 "rw_mbytes_per_sec": 0, 00:16:48.273 "r_mbytes_per_sec": 0, 00:16:48.273 "w_mbytes_per_sec": 0 00:16:48.273 }, 00:16:48.273 "claimed": true, 00:16:48.273 "claim_type": "exclusive_write", 00:16:48.273 "zoned": false, 00:16:48.273 "supported_io_types": { 00:16:48.273 "read": true, 00:16:48.273 "write": true, 00:16:48.273 "unmap": true, 00:16:48.273 "flush": true, 00:16:48.273 "reset": true, 00:16:48.273 "nvme_admin": false, 00:16:48.273 "nvme_io": false, 00:16:48.273 "nvme_io_md": false, 00:16:48.273 "write_zeroes": true, 00:16:48.273 "zcopy": true, 00:16:48.273 "get_zone_info": false, 00:16:48.273 "zone_management": false, 00:16:48.273 "zone_append": false, 00:16:48.273 "compare": false, 00:16:48.273 "compare_and_write": false, 00:16:48.273 "abort": true, 00:16:48.273 "seek_hole": false, 00:16:48.273 "seek_data": false, 00:16:48.273 "copy": true, 00:16:48.273 "nvme_iov_md": false 00:16:48.273 }, 00:16:48.273 "memory_domains": [ 00:16:48.273 { 00:16:48.273 "dma_device_id": "system", 00:16:48.273 "dma_device_type": 1 00:16:48.273 }, 00:16:48.273 { 00:16:48.273 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:48.273 "dma_device_type": 2 00:16:48.273 } 
00:16:48.273 ], 00:16:48.273 "driver_specific": {} 00:16:48.273 } 00:16:48.273 ] 00:16:48.273 04:07:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:48.273 04:07:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:16:48.273 04:07:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:16:48.273 04:07:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:16:48.273 04:07:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:16:48.273 04:07:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:48.273 04:07:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:48.273 04:07:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:48.273 04:07:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:48.273 04:07:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:48.273 04:07:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:48.273 04:07:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:48.273 04:07:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:48.273 04:07:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:48.273 04:07:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:48.273 04:07:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:48.273 04:07:41 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:48.273 04:07:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:48.273 04:07:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:48.273 04:07:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:48.273 "name": "Existed_Raid", 00:16:48.273 "uuid": "a6904a8c-53ef-41fc-a637-518891ad3c35", 00:16:48.273 "strip_size_kb": 64, 00:16:48.273 "state": "configuring", 00:16:48.273 "raid_level": "raid5f", 00:16:48.273 "superblock": true, 00:16:48.273 "num_base_bdevs": 3, 00:16:48.273 "num_base_bdevs_discovered": 2, 00:16:48.273 "num_base_bdevs_operational": 3, 00:16:48.273 "base_bdevs_list": [ 00:16:48.273 { 00:16:48.273 "name": "BaseBdev1", 00:16:48.273 "uuid": "50a99201-2cdc-4407-9b6d-0aedef88106d", 00:16:48.273 "is_configured": true, 00:16:48.273 "data_offset": 2048, 00:16:48.273 "data_size": 63488 00:16:48.273 }, 00:16:48.273 { 00:16:48.273 "name": "BaseBdev2", 00:16:48.273 "uuid": "4fd925af-e3d4-4725-a50d-e8bc0d613614", 00:16:48.273 "is_configured": true, 00:16:48.273 "data_offset": 2048, 00:16:48.273 "data_size": 63488 00:16:48.273 }, 00:16:48.273 { 00:16:48.273 "name": "BaseBdev3", 00:16:48.273 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:48.273 "is_configured": false, 00:16:48.273 "data_offset": 0, 00:16:48.273 "data_size": 0 00:16:48.273 } 00:16:48.273 ] 00:16:48.273 }' 00:16:48.273 04:07:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:48.273 04:07:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:48.841 04:07:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:16:48.841 04:07:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 
-- # xtrace_disable 00:16:48.841 04:07:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:48.841 [2024-12-06 04:07:42.012855] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:16:48.841 [2024-12-06 04:07:42.013329] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:16:48.841 [2024-12-06 04:07:42.013409] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:16:48.841 [2024-12-06 04:07:42.013793] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:16:48.841 BaseBdev3 00:16:48.842 04:07:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:48.842 04:07:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:16:48.842 04:07:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:16:48.842 04:07:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:16:48.842 04:07:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:16:48.842 04:07:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:16:48.842 04:07:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:16:48.842 04:07:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:16:48.842 04:07:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:48.842 04:07:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:48.842 [2024-12-06 04:07:42.020766] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:16:48.842 [2024-12-06 04:07:42.020856] 
bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:16:48.842 [2024-12-06 04:07:42.021290] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:48.842 04:07:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:48.842 04:07:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:16:48.842 04:07:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:48.842 04:07:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:48.842 [ 00:16:48.842 { 00:16:48.842 "name": "BaseBdev3", 00:16:48.842 "aliases": [ 00:16:48.842 "ba7587f2-b2f1-4cd0-88fc-216b780acc0a" 00:16:48.842 ], 00:16:48.842 "product_name": "Malloc disk", 00:16:48.842 "block_size": 512, 00:16:48.842 "num_blocks": 65536, 00:16:48.842 "uuid": "ba7587f2-b2f1-4cd0-88fc-216b780acc0a", 00:16:48.842 "assigned_rate_limits": { 00:16:48.842 "rw_ios_per_sec": 0, 00:16:48.842 "rw_mbytes_per_sec": 0, 00:16:48.842 "r_mbytes_per_sec": 0, 00:16:48.842 "w_mbytes_per_sec": 0 00:16:48.842 }, 00:16:48.842 "claimed": true, 00:16:48.842 "claim_type": "exclusive_write", 00:16:48.842 "zoned": false, 00:16:48.842 "supported_io_types": { 00:16:48.842 "read": true, 00:16:48.842 "write": true, 00:16:48.842 "unmap": true, 00:16:48.842 "flush": true, 00:16:48.842 "reset": true, 00:16:48.842 "nvme_admin": false, 00:16:48.842 "nvme_io": false, 00:16:48.842 "nvme_io_md": false, 00:16:48.842 "write_zeroes": true, 00:16:48.842 "zcopy": true, 00:16:48.842 "get_zone_info": false, 00:16:48.842 "zone_management": false, 00:16:48.842 "zone_append": false, 00:16:48.842 "compare": false, 00:16:48.842 "compare_and_write": false, 00:16:48.842 "abort": true, 00:16:48.842 "seek_hole": false, 00:16:48.842 "seek_data": false, 00:16:48.842 "copy": true, 00:16:48.842 
"nvme_iov_md": false 00:16:48.842 }, 00:16:48.842 "memory_domains": [ 00:16:48.842 { 00:16:48.842 "dma_device_id": "system", 00:16:48.842 "dma_device_type": 1 00:16:48.842 }, 00:16:48.842 { 00:16:48.842 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:48.842 "dma_device_type": 2 00:16:48.842 } 00:16:48.842 ], 00:16:48.842 "driver_specific": {} 00:16:48.842 } 00:16:48.842 ] 00:16:48.842 04:07:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:48.842 04:07:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:16:48.842 04:07:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:16:48.842 04:07:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:16:48.842 04:07:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:16:48.842 04:07:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:48.842 04:07:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:48.842 04:07:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:48.842 04:07:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:48.842 04:07:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:48.842 04:07:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:48.842 04:07:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:48.842 04:07:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:48.842 04:07:42 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:16:48.842 04:07:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:48.842 04:07:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:48.842 04:07:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:48.842 04:07:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:48.842 04:07:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:48.842 04:07:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:48.842 "name": "Existed_Raid", 00:16:48.842 "uuid": "a6904a8c-53ef-41fc-a637-518891ad3c35", 00:16:48.842 "strip_size_kb": 64, 00:16:48.842 "state": "online", 00:16:48.842 "raid_level": "raid5f", 00:16:48.842 "superblock": true, 00:16:48.842 "num_base_bdevs": 3, 00:16:48.842 "num_base_bdevs_discovered": 3, 00:16:48.842 "num_base_bdevs_operational": 3, 00:16:48.842 "base_bdevs_list": [ 00:16:48.842 { 00:16:48.842 "name": "BaseBdev1", 00:16:48.842 "uuid": "50a99201-2cdc-4407-9b6d-0aedef88106d", 00:16:48.842 "is_configured": true, 00:16:48.842 "data_offset": 2048, 00:16:48.842 "data_size": 63488 00:16:48.842 }, 00:16:48.842 { 00:16:48.842 "name": "BaseBdev2", 00:16:48.842 "uuid": "4fd925af-e3d4-4725-a50d-e8bc0d613614", 00:16:48.842 "is_configured": true, 00:16:48.842 "data_offset": 2048, 00:16:48.842 "data_size": 63488 00:16:48.842 }, 00:16:48.842 { 00:16:48.842 "name": "BaseBdev3", 00:16:48.842 "uuid": "ba7587f2-b2f1-4cd0-88fc-216b780acc0a", 00:16:48.842 "is_configured": true, 00:16:48.842 "data_offset": 2048, 00:16:48.842 "data_size": 63488 00:16:48.842 } 00:16:48.842 ] 00:16:48.842 }' 00:16:48.842 04:07:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:48.842 04:07:42 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:49.409 04:07:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:16:49.409 04:07:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:16:49.409 04:07:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:16:49.409 04:07:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:16:49.409 04:07:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:16:49.409 04:07:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:16:49.409 04:07:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:16:49.409 04:07:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:16:49.409 04:07:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:49.409 04:07:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:49.409 [2024-12-06 04:07:42.512747] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:49.409 04:07:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:49.409 04:07:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:16:49.409 "name": "Existed_Raid", 00:16:49.409 "aliases": [ 00:16:49.409 "a6904a8c-53ef-41fc-a637-518891ad3c35" 00:16:49.409 ], 00:16:49.409 "product_name": "Raid Volume", 00:16:49.409 "block_size": 512, 00:16:49.409 "num_blocks": 126976, 00:16:49.409 "uuid": "a6904a8c-53ef-41fc-a637-518891ad3c35", 00:16:49.409 "assigned_rate_limits": { 00:16:49.409 "rw_ios_per_sec": 0, 00:16:49.409 
"rw_mbytes_per_sec": 0, 00:16:49.409 "r_mbytes_per_sec": 0, 00:16:49.409 "w_mbytes_per_sec": 0 00:16:49.409 }, 00:16:49.409 "claimed": false, 00:16:49.409 "zoned": false, 00:16:49.409 "supported_io_types": { 00:16:49.409 "read": true, 00:16:49.409 "write": true, 00:16:49.409 "unmap": false, 00:16:49.409 "flush": false, 00:16:49.409 "reset": true, 00:16:49.409 "nvme_admin": false, 00:16:49.409 "nvme_io": false, 00:16:49.409 "nvme_io_md": false, 00:16:49.409 "write_zeroes": true, 00:16:49.409 "zcopy": false, 00:16:49.409 "get_zone_info": false, 00:16:49.409 "zone_management": false, 00:16:49.409 "zone_append": false, 00:16:49.409 "compare": false, 00:16:49.409 "compare_and_write": false, 00:16:49.409 "abort": false, 00:16:49.409 "seek_hole": false, 00:16:49.409 "seek_data": false, 00:16:49.409 "copy": false, 00:16:49.409 "nvme_iov_md": false 00:16:49.409 }, 00:16:49.409 "driver_specific": { 00:16:49.409 "raid": { 00:16:49.409 "uuid": "a6904a8c-53ef-41fc-a637-518891ad3c35", 00:16:49.409 "strip_size_kb": 64, 00:16:49.409 "state": "online", 00:16:49.409 "raid_level": "raid5f", 00:16:49.409 "superblock": true, 00:16:49.409 "num_base_bdevs": 3, 00:16:49.409 "num_base_bdevs_discovered": 3, 00:16:49.409 "num_base_bdevs_operational": 3, 00:16:49.409 "base_bdevs_list": [ 00:16:49.409 { 00:16:49.409 "name": "BaseBdev1", 00:16:49.409 "uuid": "50a99201-2cdc-4407-9b6d-0aedef88106d", 00:16:49.409 "is_configured": true, 00:16:49.409 "data_offset": 2048, 00:16:49.409 "data_size": 63488 00:16:49.409 }, 00:16:49.409 { 00:16:49.409 "name": "BaseBdev2", 00:16:49.409 "uuid": "4fd925af-e3d4-4725-a50d-e8bc0d613614", 00:16:49.409 "is_configured": true, 00:16:49.409 "data_offset": 2048, 00:16:49.409 "data_size": 63488 00:16:49.409 }, 00:16:49.409 { 00:16:49.409 "name": "BaseBdev3", 00:16:49.409 "uuid": "ba7587f2-b2f1-4cd0-88fc-216b780acc0a", 00:16:49.409 "is_configured": true, 00:16:49.409 "data_offset": 2048, 00:16:49.409 "data_size": 63488 00:16:49.409 } 00:16:49.409 ] 00:16:49.409 } 
00:16:49.409 } 00:16:49.409 }' 00:16:49.409 04:07:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:16:49.409 04:07:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:16:49.409 BaseBdev2 00:16:49.409 BaseBdev3' 00:16:49.409 04:07:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:49.409 04:07:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:16:49.409 04:07:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:49.410 04:07:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:16:49.410 04:07:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:49.410 04:07:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:49.410 04:07:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:49.410 04:07:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:49.410 04:07:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:49.410 04:07:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:49.410 04:07:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:49.410 04:07:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:49.410 04:07:42 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:16:49.410 04:07:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:49.410 04:07:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:49.410 04:07:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:49.410 04:07:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:49.410 04:07:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:49.410 04:07:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:49.410 04:07:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:16:49.410 04:07:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:49.410 04:07:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:49.410 04:07:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:49.410 04:07:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:49.410 04:07:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:49.410 04:07:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:49.410 04:07:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:16:49.410 04:07:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:49.410 04:07:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:49.410 [2024-12-06 
04:07:42.752189] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:16:49.669 04:07:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:49.669 04:07:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:16:49.669 04:07:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid5f 00:16:49.669 04:07:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:16:49.669 04:07:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@199 -- # return 0 00:16:49.669 04:07:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:16:49.669 04:07:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 2 00:16:49.669 04:07:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:49.669 04:07:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:49.669 04:07:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:49.669 04:07:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:49.669 04:07:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:49.669 04:07:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:49.669 04:07:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:49.669 04:07:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:49.669 04:07:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:49.669 04:07:42 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:49.669 04:07:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:49.669 04:07:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:49.669 04:07:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:49.669 04:07:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:49.669 04:07:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:49.669 "name": "Existed_Raid", 00:16:49.669 "uuid": "a6904a8c-53ef-41fc-a637-518891ad3c35", 00:16:49.669 "strip_size_kb": 64, 00:16:49.669 "state": "online", 00:16:49.669 "raid_level": "raid5f", 00:16:49.669 "superblock": true, 00:16:49.669 "num_base_bdevs": 3, 00:16:49.669 "num_base_bdevs_discovered": 2, 00:16:49.669 "num_base_bdevs_operational": 2, 00:16:49.669 "base_bdevs_list": [ 00:16:49.669 { 00:16:49.669 "name": null, 00:16:49.669 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:49.669 "is_configured": false, 00:16:49.669 "data_offset": 0, 00:16:49.669 "data_size": 63488 00:16:49.669 }, 00:16:49.669 { 00:16:49.669 "name": "BaseBdev2", 00:16:49.669 "uuid": "4fd925af-e3d4-4725-a50d-e8bc0d613614", 00:16:49.669 "is_configured": true, 00:16:49.669 "data_offset": 2048, 00:16:49.669 "data_size": 63488 00:16:49.669 }, 00:16:49.669 { 00:16:49.669 "name": "BaseBdev3", 00:16:49.669 "uuid": "ba7587f2-b2f1-4cd0-88fc-216b780acc0a", 00:16:49.669 "is_configured": true, 00:16:49.669 "data_offset": 2048, 00:16:49.669 "data_size": 63488 00:16:49.669 } 00:16:49.669 ] 00:16:49.669 }' 00:16:49.669 04:07:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:49.669 04:07:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 
00:16:49.928 04:07:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:16:49.928 04:07:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:16:49.928 04:07:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:49.928 04:07:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:49.928 04:07:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:49.928 04:07:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:16:49.928 04:07:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:50.187 04:07:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:16:50.187 04:07:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:16:50.187 04:07:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:16:50.187 04:07:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:50.187 04:07:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:50.187 [2024-12-06 04:07:43.306412] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:16:50.187 [2024-12-06 04:07:43.306585] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:50.187 [2024-12-06 04:07:43.415324] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:50.187 04:07:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:50.187 04:07:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:16:50.187 04:07:43 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:16:50.187 04:07:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:50.187 04:07:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:50.187 04:07:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:50.187 04:07:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:16:50.187 04:07:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:50.187 04:07:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:16:50.187 04:07:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:16:50.187 04:07:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:16:50.187 04:07:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:50.187 04:07:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:50.187 [2024-12-06 04:07:43.475301] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:16:50.187 [2024-12-06 04:07:43.475413] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:16:50.448 04:07:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:50.448 04:07:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:16:50.448 04:07:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:16:50.448 04:07:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:50.448 
04:07:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:50.448 04:07:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:50.448 04:07:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:16:50.448 04:07:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:50.448 04:07:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:16:50.448 04:07:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:16:50.448 04:07:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:16:50.448 04:07:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:16:50.448 04:07:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:16:50.448 04:07:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:16:50.448 04:07:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:50.448 04:07:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:50.448 BaseBdev2 00:16:50.448 04:07:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:50.448 04:07:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:16:50.448 04:07:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:16:50.448 04:07:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:16:50.448 04:07:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:16:50.448 04:07:43 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:16:50.448 04:07:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:16:50.448 04:07:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:16:50.448 04:07:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:50.448 04:07:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:50.448 04:07:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:50.448 04:07:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:16:50.448 04:07:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:50.448 04:07:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:50.448 [ 00:16:50.448 { 00:16:50.448 "name": "BaseBdev2", 00:16:50.448 "aliases": [ 00:16:50.448 "79561954-c13c-4bde-93a1-645cc1f7eea8" 00:16:50.448 ], 00:16:50.448 "product_name": "Malloc disk", 00:16:50.448 "block_size": 512, 00:16:50.448 "num_blocks": 65536, 00:16:50.448 "uuid": "79561954-c13c-4bde-93a1-645cc1f7eea8", 00:16:50.448 "assigned_rate_limits": { 00:16:50.448 "rw_ios_per_sec": 0, 00:16:50.448 "rw_mbytes_per_sec": 0, 00:16:50.448 "r_mbytes_per_sec": 0, 00:16:50.448 "w_mbytes_per_sec": 0 00:16:50.448 }, 00:16:50.448 "claimed": false, 00:16:50.448 "zoned": false, 00:16:50.448 "supported_io_types": { 00:16:50.448 "read": true, 00:16:50.448 "write": true, 00:16:50.448 "unmap": true, 00:16:50.448 "flush": true, 00:16:50.448 "reset": true, 00:16:50.448 "nvme_admin": false, 00:16:50.448 "nvme_io": false, 00:16:50.448 "nvme_io_md": false, 00:16:50.448 "write_zeroes": true, 00:16:50.448 "zcopy": true, 00:16:50.448 "get_zone_info": false, 
00:16:50.448 "zone_management": false, 00:16:50.448 "zone_append": false, 00:16:50.448 "compare": false, 00:16:50.448 "compare_and_write": false, 00:16:50.448 "abort": true, 00:16:50.448 "seek_hole": false, 00:16:50.448 "seek_data": false, 00:16:50.448 "copy": true, 00:16:50.448 "nvme_iov_md": false 00:16:50.448 }, 00:16:50.448 "memory_domains": [ 00:16:50.448 { 00:16:50.448 "dma_device_id": "system", 00:16:50.448 "dma_device_type": 1 00:16:50.448 }, 00:16:50.448 { 00:16:50.448 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:50.448 "dma_device_type": 2 00:16:50.448 } 00:16:50.448 ], 00:16:50.448 "driver_specific": {} 00:16:50.448 } 00:16:50.448 ] 00:16:50.448 04:07:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:50.448 04:07:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:16:50.448 04:07:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:16:50.448 04:07:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:16:50.448 04:07:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:16:50.448 04:07:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:50.448 04:07:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:50.448 BaseBdev3 00:16:50.448 04:07:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:50.448 04:07:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:16:50.448 04:07:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:16:50.448 04:07:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:16:50.448 04:07:43 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:16:50.448 04:07:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:16:50.448 04:07:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:16:50.448 04:07:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:16:50.448 04:07:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:50.448 04:07:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:50.448 04:07:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:50.448 04:07:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:16:50.448 04:07:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:50.448 04:07:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:50.448 [ 00:16:50.448 { 00:16:50.448 "name": "BaseBdev3", 00:16:50.448 "aliases": [ 00:16:50.448 "57128a1e-d924-4339-9f20-7aa152ad7484" 00:16:50.448 ], 00:16:50.448 "product_name": "Malloc disk", 00:16:50.448 "block_size": 512, 00:16:50.448 "num_blocks": 65536, 00:16:50.448 "uuid": "57128a1e-d924-4339-9f20-7aa152ad7484", 00:16:50.448 "assigned_rate_limits": { 00:16:50.448 "rw_ios_per_sec": 0, 00:16:50.448 "rw_mbytes_per_sec": 0, 00:16:50.448 "r_mbytes_per_sec": 0, 00:16:50.448 "w_mbytes_per_sec": 0 00:16:50.448 }, 00:16:50.448 "claimed": false, 00:16:50.448 "zoned": false, 00:16:50.448 "supported_io_types": { 00:16:50.448 "read": true, 00:16:50.448 "write": true, 00:16:50.449 "unmap": true, 00:16:50.449 "flush": true, 00:16:50.449 "reset": true, 00:16:50.449 "nvme_admin": false, 00:16:50.449 "nvme_io": false, 00:16:50.449 "nvme_io_md": 
false, 00:16:50.449 "write_zeroes": true, 00:16:50.449 "zcopy": true, 00:16:50.449 "get_zone_info": false, 00:16:50.449 "zone_management": false, 00:16:50.449 "zone_append": false, 00:16:50.449 "compare": false, 00:16:50.449 "compare_and_write": false, 00:16:50.449 "abort": true, 00:16:50.449 "seek_hole": false, 00:16:50.449 "seek_data": false, 00:16:50.449 "copy": true, 00:16:50.449 "nvme_iov_md": false 00:16:50.449 }, 00:16:50.449 "memory_domains": [ 00:16:50.449 { 00:16:50.449 "dma_device_id": "system", 00:16:50.449 "dma_device_type": 1 00:16:50.449 }, 00:16:50.449 { 00:16:50.449 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:50.449 "dma_device_type": 2 00:16:50.449 } 00:16:50.709 ], 00:16:50.709 "driver_specific": {} 00:16:50.709 } 00:16:50.709 ] 00:16:50.709 04:07:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:50.709 04:07:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:16:50.709 04:07:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:16:50.709 04:07:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:16:50.709 04:07:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:16:50.709 04:07:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:50.709 04:07:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:50.709 [2024-12-06 04:07:43.809949] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:16:50.709 [2024-12-06 04:07:43.810058] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:16:50.709 [2024-12-06 04:07:43.810123] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is 
claimed 00:16:50.709 [2024-12-06 04:07:43.811944] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:16:50.709 04:07:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:50.709 04:07:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:16:50.709 04:07:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:50.709 04:07:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:50.709 04:07:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:50.709 04:07:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:50.709 04:07:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:50.709 04:07:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:50.709 04:07:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:50.709 04:07:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:50.709 04:07:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:50.709 04:07:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:50.709 04:07:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:50.709 04:07:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:50.709 04:07:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:50.709 04:07:43 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:50.709 04:07:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:50.709 "name": "Existed_Raid", 00:16:50.709 "uuid": "b2e2edf5-1136-4c0f-9a23-bdc5c38b70ba", 00:16:50.709 "strip_size_kb": 64, 00:16:50.709 "state": "configuring", 00:16:50.709 "raid_level": "raid5f", 00:16:50.709 "superblock": true, 00:16:50.709 "num_base_bdevs": 3, 00:16:50.709 "num_base_bdevs_discovered": 2, 00:16:50.709 "num_base_bdevs_operational": 3, 00:16:50.709 "base_bdevs_list": [ 00:16:50.709 { 00:16:50.709 "name": "BaseBdev1", 00:16:50.709 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:50.709 "is_configured": false, 00:16:50.709 "data_offset": 0, 00:16:50.709 "data_size": 0 00:16:50.709 }, 00:16:50.709 { 00:16:50.709 "name": "BaseBdev2", 00:16:50.709 "uuid": "79561954-c13c-4bde-93a1-645cc1f7eea8", 00:16:50.709 "is_configured": true, 00:16:50.709 "data_offset": 2048, 00:16:50.709 "data_size": 63488 00:16:50.709 }, 00:16:50.709 { 00:16:50.709 "name": "BaseBdev3", 00:16:50.709 "uuid": "57128a1e-d924-4339-9f20-7aa152ad7484", 00:16:50.709 "is_configured": true, 00:16:50.709 "data_offset": 2048, 00:16:50.709 "data_size": 63488 00:16:50.709 } 00:16:50.709 ] 00:16:50.709 }' 00:16:50.709 04:07:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:50.709 04:07:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:50.968 04:07:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:16:50.968 04:07:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:50.968 04:07:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:50.968 [2024-12-06 04:07:44.257235] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:16:50.968 
04:07:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:50.968 04:07:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:16:50.968 04:07:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:50.968 04:07:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:50.968 04:07:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:50.968 04:07:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:50.968 04:07:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:50.968 04:07:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:50.968 04:07:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:50.968 04:07:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:50.968 04:07:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:50.968 04:07:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:50.968 04:07:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:50.968 04:07:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:50.968 04:07:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:50.968 04:07:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:50.968 04:07:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # 
raid_bdev_info='{ 00:16:50.968 "name": "Existed_Raid", 00:16:50.968 "uuid": "b2e2edf5-1136-4c0f-9a23-bdc5c38b70ba", 00:16:50.968 "strip_size_kb": 64, 00:16:50.968 "state": "configuring", 00:16:50.968 "raid_level": "raid5f", 00:16:50.968 "superblock": true, 00:16:50.968 "num_base_bdevs": 3, 00:16:50.968 "num_base_bdevs_discovered": 1, 00:16:50.968 "num_base_bdevs_operational": 3, 00:16:50.968 "base_bdevs_list": [ 00:16:50.968 { 00:16:50.968 "name": "BaseBdev1", 00:16:50.968 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:50.968 "is_configured": false, 00:16:50.968 "data_offset": 0, 00:16:50.968 "data_size": 0 00:16:50.968 }, 00:16:50.968 { 00:16:50.968 "name": null, 00:16:50.968 "uuid": "79561954-c13c-4bde-93a1-645cc1f7eea8", 00:16:50.968 "is_configured": false, 00:16:50.968 "data_offset": 0, 00:16:50.968 "data_size": 63488 00:16:50.968 }, 00:16:50.968 { 00:16:50.968 "name": "BaseBdev3", 00:16:50.968 "uuid": "57128a1e-d924-4339-9f20-7aa152ad7484", 00:16:50.968 "is_configured": true, 00:16:50.968 "data_offset": 2048, 00:16:50.968 "data_size": 63488 00:16:50.968 } 00:16:50.968 ] 00:16:50.968 }' 00:16:50.968 04:07:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:50.968 04:07:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:51.535 04:07:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:51.535 04:07:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:51.535 04:07:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:51.535 04:07:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:16:51.535 04:07:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:51.535 04:07:44 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:16:51.535 04:07:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:16:51.535 04:07:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:51.535 04:07:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:51.535 [2024-12-06 04:07:44.823584] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:51.535 BaseBdev1 00:16:51.535 04:07:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:51.535 04:07:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:16:51.535 04:07:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:16:51.535 04:07:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:16:51.535 04:07:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:16:51.535 04:07:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:16:51.535 04:07:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:16:51.535 04:07:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:16:51.535 04:07:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:51.535 04:07:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:51.535 04:07:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:51.536 04:07:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:16:51.536 
04:07:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:51.536 04:07:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:51.536 [ 00:16:51.536 { 00:16:51.536 "name": "BaseBdev1", 00:16:51.536 "aliases": [ 00:16:51.536 "b9838209-dff1-4476-8619-4d199c1bdab1" 00:16:51.536 ], 00:16:51.536 "product_name": "Malloc disk", 00:16:51.536 "block_size": 512, 00:16:51.536 "num_blocks": 65536, 00:16:51.536 "uuid": "b9838209-dff1-4476-8619-4d199c1bdab1", 00:16:51.536 "assigned_rate_limits": { 00:16:51.536 "rw_ios_per_sec": 0, 00:16:51.536 "rw_mbytes_per_sec": 0, 00:16:51.536 "r_mbytes_per_sec": 0, 00:16:51.536 "w_mbytes_per_sec": 0 00:16:51.536 }, 00:16:51.536 "claimed": true, 00:16:51.536 "claim_type": "exclusive_write", 00:16:51.536 "zoned": false, 00:16:51.536 "supported_io_types": { 00:16:51.536 "read": true, 00:16:51.536 "write": true, 00:16:51.536 "unmap": true, 00:16:51.536 "flush": true, 00:16:51.536 "reset": true, 00:16:51.536 "nvme_admin": false, 00:16:51.536 "nvme_io": false, 00:16:51.536 "nvme_io_md": false, 00:16:51.536 "write_zeroes": true, 00:16:51.536 "zcopy": true, 00:16:51.536 "get_zone_info": false, 00:16:51.536 "zone_management": false, 00:16:51.536 "zone_append": false, 00:16:51.536 "compare": false, 00:16:51.536 "compare_and_write": false, 00:16:51.536 "abort": true, 00:16:51.536 "seek_hole": false, 00:16:51.536 "seek_data": false, 00:16:51.536 "copy": true, 00:16:51.536 "nvme_iov_md": false 00:16:51.536 }, 00:16:51.536 "memory_domains": [ 00:16:51.536 { 00:16:51.536 "dma_device_id": "system", 00:16:51.536 "dma_device_type": 1 00:16:51.536 }, 00:16:51.536 { 00:16:51.536 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:51.536 "dma_device_type": 2 00:16:51.536 } 00:16:51.536 ], 00:16:51.536 "driver_specific": {} 00:16:51.536 } 00:16:51.536 ] 00:16:51.536 04:07:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:51.536 
04:07:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:16:51.536 04:07:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:16:51.536 04:07:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:51.536 04:07:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:51.536 04:07:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:51.536 04:07:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:51.536 04:07:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:51.536 04:07:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:51.536 04:07:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:51.536 04:07:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:51.536 04:07:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:51.536 04:07:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:51.536 04:07:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:51.536 04:07:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:51.536 04:07:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:51.536 04:07:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:51.794 04:07:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # 
raid_bdev_info='{ 00:16:51.794 "name": "Existed_Raid", 00:16:51.794 "uuid": "b2e2edf5-1136-4c0f-9a23-bdc5c38b70ba", 00:16:51.794 "strip_size_kb": 64, 00:16:51.794 "state": "configuring", 00:16:51.794 "raid_level": "raid5f", 00:16:51.794 "superblock": true, 00:16:51.794 "num_base_bdevs": 3, 00:16:51.794 "num_base_bdevs_discovered": 2, 00:16:51.794 "num_base_bdevs_operational": 3, 00:16:51.794 "base_bdevs_list": [ 00:16:51.794 { 00:16:51.794 "name": "BaseBdev1", 00:16:51.794 "uuid": "b9838209-dff1-4476-8619-4d199c1bdab1", 00:16:51.794 "is_configured": true, 00:16:51.794 "data_offset": 2048, 00:16:51.794 "data_size": 63488 00:16:51.794 }, 00:16:51.794 { 00:16:51.794 "name": null, 00:16:51.794 "uuid": "79561954-c13c-4bde-93a1-645cc1f7eea8", 00:16:51.794 "is_configured": false, 00:16:51.794 "data_offset": 0, 00:16:51.794 "data_size": 63488 00:16:51.794 }, 00:16:51.794 { 00:16:51.794 "name": "BaseBdev3", 00:16:51.794 "uuid": "57128a1e-d924-4339-9f20-7aa152ad7484", 00:16:51.794 "is_configured": true, 00:16:51.794 "data_offset": 2048, 00:16:51.794 "data_size": 63488 00:16:51.794 } 00:16:51.794 ] 00:16:51.794 }' 00:16:51.795 04:07:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:51.795 04:07:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:52.051 04:07:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:52.051 04:07:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:52.051 04:07:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:52.051 04:07:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:16:52.051 04:07:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:52.051 04:07:45 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:16:52.051 04:07:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:16:52.051 04:07:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:52.051 04:07:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:52.308 [2024-12-06 04:07:45.410675] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:16:52.308 04:07:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:52.308 04:07:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:16:52.308 04:07:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:52.308 04:07:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:52.308 04:07:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:52.308 04:07:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:52.308 04:07:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:52.308 04:07:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:52.308 04:07:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:52.308 04:07:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:52.308 04:07:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:52.308 04:07:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:52.308 04:07:45 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:52.308 04:07:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:52.308 04:07:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:52.308 04:07:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:52.308 04:07:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:52.308 "name": "Existed_Raid", 00:16:52.308 "uuid": "b2e2edf5-1136-4c0f-9a23-bdc5c38b70ba", 00:16:52.308 "strip_size_kb": 64, 00:16:52.308 "state": "configuring", 00:16:52.308 "raid_level": "raid5f", 00:16:52.308 "superblock": true, 00:16:52.308 "num_base_bdevs": 3, 00:16:52.308 "num_base_bdevs_discovered": 1, 00:16:52.308 "num_base_bdevs_operational": 3, 00:16:52.308 "base_bdevs_list": [ 00:16:52.308 { 00:16:52.308 "name": "BaseBdev1", 00:16:52.308 "uuid": "b9838209-dff1-4476-8619-4d199c1bdab1", 00:16:52.308 "is_configured": true, 00:16:52.308 "data_offset": 2048, 00:16:52.308 "data_size": 63488 00:16:52.308 }, 00:16:52.308 { 00:16:52.308 "name": null, 00:16:52.308 "uuid": "79561954-c13c-4bde-93a1-645cc1f7eea8", 00:16:52.308 "is_configured": false, 00:16:52.308 "data_offset": 0, 00:16:52.308 "data_size": 63488 00:16:52.308 }, 00:16:52.309 { 00:16:52.309 "name": null, 00:16:52.309 "uuid": "57128a1e-d924-4339-9f20-7aa152ad7484", 00:16:52.309 "is_configured": false, 00:16:52.309 "data_offset": 0, 00:16:52.309 "data_size": 63488 00:16:52.309 } 00:16:52.309 ] 00:16:52.309 }' 00:16:52.309 04:07:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:52.309 04:07:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:52.566 04:07:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 
00:16:52.566 04:07:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:16:52.566 04:07:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:52.566 04:07:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:52.566 04:07:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:52.566 04:07:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:16:52.566 04:07:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:16:52.566 04:07:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:52.566 04:07:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:52.566 [2024-12-06 04:07:45.893909] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:16:52.566 04:07:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:52.566 04:07:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:16:52.566 04:07:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:52.566 04:07:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:52.566 04:07:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:52.566 04:07:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:52.566 04:07:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:52.566 04:07:45 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:52.566 04:07:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:52.566 04:07:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:52.566 04:07:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:52.566 04:07:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:52.566 04:07:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:52.566 04:07:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:52.566 04:07:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:52.824 04:07:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:52.824 04:07:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:52.824 "name": "Existed_Raid", 00:16:52.824 "uuid": "b2e2edf5-1136-4c0f-9a23-bdc5c38b70ba", 00:16:52.824 "strip_size_kb": 64, 00:16:52.824 "state": "configuring", 00:16:52.824 "raid_level": "raid5f", 00:16:52.824 "superblock": true, 00:16:52.824 "num_base_bdevs": 3, 00:16:52.824 "num_base_bdevs_discovered": 2, 00:16:52.824 "num_base_bdevs_operational": 3, 00:16:52.824 "base_bdevs_list": [ 00:16:52.824 { 00:16:52.824 "name": "BaseBdev1", 00:16:52.824 "uuid": "b9838209-dff1-4476-8619-4d199c1bdab1", 00:16:52.824 "is_configured": true, 00:16:52.824 "data_offset": 2048, 00:16:52.824 "data_size": 63488 00:16:52.824 }, 00:16:52.824 { 00:16:52.824 "name": null, 00:16:52.824 "uuid": "79561954-c13c-4bde-93a1-645cc1f7eea8", 00:16:52.824 "is_configured": false, 00:16:52.824 "data_offset": 0, 00:16:52.824 "data_size": 63488 00:16:52.824 }, 00:16:52.824 { 
00:16:52.824 "name": "BaseBdev3", 00:16:52.824 "uuid": "57128a1e-d924-4339-9f20-7aa152ad7484", 00:16:52.824 "is_configured": true, 00:16:52.824 "data_offset": 2048, 00:16:52.824 "data_size": 63488 00:16:52.824 } 00:16:52.824 ] 00:16:52.824 }' 00:16:52.824 04:07:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:52.824 04:07:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:53.111 04:07:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:53.111 04:07:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:53.111 04:07:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:16:53.111 04:07:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:53.111 04:07:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:53.111 04:07:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:16:53.111 04:07:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:16:53.111 04:07:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:53.111 04:07:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:53.111 [2024-12-06 04:07:46.409079] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:16:53.369 04:07:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:53.369 04:07:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:16:53.369 04:07:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local 
raid_bdev_name=Existed_Raid 00:16:53.369 04:07:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:53.369 04:07:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:53.369 04:07:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:53.369 04:07:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:53.369 04:07:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:53.369 04:07:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:53.369 04:07:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:53.369 04:07:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:53.369 04:07:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:53.369 04:07:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:53.369 04:07:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:53.369 04:07:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:53.369 04:07:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:53.369 04:07:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:53.369 "name": "Existed_Raid", 00:16:53.369 "uuid": "b2e2edf5-1136-4c0f-9a23-bdc5c38b70ba", 00:16:53.369 "strip_size_kb": 64, 00:16:53.369 "state": "configuring", 00:16:53.369 "raid_level": "raid5f", 00:16:53.369 "superblock": true, 00:16:53.369 "num_base_bdevs": 3, 00:16:53.369 "num_base_bdevs_discovered": 1, 00:16:53.369 
"num_base_bdevs_operational": 3, 00:16:53.369 "base_bdevs_list": [ 00:16:53.369 { 00:16:53.369 "name": null, 00:16:53.369 "uuid": "b9838209-dff1-4476-8619-4d199c1bdab1", 00:16:53.369 "is_configured": false, 00:16:53.369 "data_offset": 0, 00:16:53.369 "data_size": 63488 00:16:53.369 }, 00:16:53.369 { 00:16:53.369 "name": null, 00:16:53.370 "uuid": "79561954-c13c-4bde-93a1-645cc1f7eea8", 00:16:53.370 "is_configured": false, 00:16:53.370 "data_offset": 0, 00:16:53.370 "data_size": 63488 00:16:53.370 }, 00:16:53.370 { 00:16:53.370 "name": "BaseBdev3", 00:16:53.370 "uuid": "57128a1e-d924-4339-9f20-7aa152ad7484", 00:16:53.370 "is_configured": true, 00:16:53.370 "data_offset": 2048, 00:16:53.370 "data_size": 63488 00:16:53.370 } 00:16:53.370 ] 00:16:53.370 }' 00:16:53.370 04:07:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:53.370 04:07:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:53.938 04:07:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:16:53.938 04:07:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:53.938 04:07:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:53.938 04:07:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:53.938 04:07:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:53.938 04:07:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:16:53.938 04:07:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:16:53.938 04:07:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:53.938 04:07:47 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:53.938 [2024-12-06 04:07:47.022289] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:53.938 04:07:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:53.938 04:07:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:16:53.938 04:07:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:53.938 04:07:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:53.938 04:07:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:53.938 04:07:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:53.938 04:07:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:53.938 04:07:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:53.938 04:07:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:53.938 04:07:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:53.938 04:07:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:53.938 04:07:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:53.938 04:07:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:53.938 04:07:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:53.938 04:07:47 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:16:53.938 04:07:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:53.938 04:07:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:53.938 "name": "Existed_Raid", 00:16:53.938 "uuid": "b2e2edf5-1136-4c0f-9a23-bdc5c38b70ba", 00:16:53.938 "strip_size_kb": 64, 00:16:53.938 "state": "configuring", 00:16:53.938 "raid_level": "raid5f", 00:16:53.938 "superblock": true, 00:16:53.938 "num_base_bdevs": 3, 00:16:53.938 "num_base_bdevs_discovered": 2, 00:16:53.938 "num_base_bdevs_operational": 3, 00:16:53.938 "base_bdevs_list": [ 00:16:53.938 { 00:16:53.938 "name": null, 00:16:53.938 "uuid": "b9838209-dff1-4476-8619-4d199c1bdab1", 00:16:53.938 "is_configured": false, 00:16:53.938 "data_offset": 0, 00:16:53.938 "data_size": 63488 00:16:53.938 }, 00:16:53.938 { 00:16:53.938 "name": "BaseBdev2", 00:16:53.938 "uuid": "79561954-c13c-4bde-93a1-645cc1f7eea8", 00:16:53.938 "is_configured": true, 00:16:53.938 "data_offset": 2048, 00:16:53.938 "data_size": 63488 00:16:53.938 }, 00:16:53.938 { 00:16:53.938 "name": "BaseBdev3", 00:16:53.938 "uuid": "57128a1e-d924-4339-9f20-7aa152ad7484", 00:16:53.938 "is_configured": true, 00:16:53.938 "data_offset": 2048, 00:16:53.938 "data_size": 63488 00:16:53.938 } 00:16:53.938 ] 00:16:53.938 }' 00:16:53.938 04:07:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:53.938 04:07:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:54.197 04:07:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:54.197 04:07:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:54.197 04:07:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:16:54.197 04:07:47 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:54.197 04:07:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:54.197 04:07:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:16:54.197 04:07:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:54.197 04:07:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:54.197 04:07:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:54.197 04:07:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:16:54.197 04:07:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:54.197 04:07:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u b9838209-dff1-4476-8619-4d199c1bdab1 00:16:54.197 04:07:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:54.197 04:07:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:54.457 [2024-12-06 04:07:47.561660] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:16:54.457 [2024-12-06 04:07:47.561956] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:16:54.457 [2024-12-06 04:07:47.561976] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:16:54.457 [2024-12-06 04:07:47.562289] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:16:54.457 NewBaseBdev 00:16:54.457 04:07:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:54.457 04:07:47 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:16:54.457 04:07:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:16:54.457 04:07:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:16:54.457 04:07:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:16:54.457 04:07:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:16:54.457 04:07:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:16:54.457 04:07:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:16:54.457 04:07:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:54.457 04:07:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:54.457 [2024-12-06 04:07:47.568071] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:16:54.457 [2024-12-06 04:07:47.568098] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:16:54.457 [2024-12-06 04:07:47.568406] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:54.457 04:07:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:54.457 04:07:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:16:54.457 04:07:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:54.457 04:07:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:54.457 [ 00:16:54.457 { 00:16:54.457 "name": "NewBaseBdev", 00:16:54.457 
"aliases": [ 00:16:54.457 "b9838209-dff1-4476-8619-4d199c1bdab1" 00:16:54.457 ], 00:16:54.457 "product_name": "Malloc disk", 00:16:54.457 "block_size": 512, 00:16:54.457 "num_blocks": 65536, 00:16:54.457 "uuid": "b9838209-dff1-4476-8619-4d199c1bdab1", 00:16:54.457 "assigned_rate_limits": { 00:16:54.457 "rw_ios_per_sec": 0, 00:16:54.457 "rw_mbytes_per_sec": 0, 00:16:54.457 "r_mbytes_per_sec": 0, 00:16:54.457 "w_mbytes_per_sec": 0 00:16:54.457 }, 00:16:54.457 "claimed": true, 00:16:54.457 "claim_type": "exclusive_write", 00:16:54.457 "zoned": false, 00:16:54.457 "supported_io_types": { 00:16:54.457 "read": true, 00:16:54.457 "write": true, 00:16:54.457 "unmap": true, 00:16:54.457 "flush": true, 00:16:54.457 "reset": true, 00:16:54.457 "nvme_admin": false, 00:16:54.457 "nvme_io": false, 00:16:54.457 "nvme_io_md": false, 00:16:54.457 "write_zeroes": true, 00:16:54.457 "zcopy": true, 00:16:54.457 "get_zone_info": false, 00:16:54.457 "zone_management": false, 00:16:54.457 "zone_append": false, 00:16:54.457 "compare": false, 00:16:54.457 "compare_and_write": false, 00:16:54.457 "abort": true, 00:16:54.457 "seek_hole": false, 00:16:54.457 "seek_data": false, 00:16:54.457 "copy": true, 00:16:54.457 "nvme_iov_md": false 00:16:54.457 }, 00:16:54.457 "memory_domains": [ 00:16:54.457 { 00:16:54.457 "dma_device_id": "system", 00:16:54.457 "dma_device_type": 1 00:16:54.457 }, 00:16:54.457 { 00:16:54.457 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:54.457 "dma_device_type": 2 00:16:54.457 } 00:16:54.457 ], 00:16:54.457 "driver_specific": {} 00:16:54.457 } 00:16:54.457 ] 00:16:54.457 04:07:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:54.457 04:07:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:16:54.457 04:07:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:16:54.457 04:07:47 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:54.457 04:07:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:54.457 04:07:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:54.457 04:07:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:54.457 04:07:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:54.457 04:07:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:54.457 04:07:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:54.457 04:07:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:54.457 04:07:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:54.457 04:07:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:54.457 04:07:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:54.457 04:07:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:54.457 04:07:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:54.457 04:07:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:54.457 04:07:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:54.457 "name": "Existed_Raid", 00:16:54.457 "uuid": "b2e2edf5-1136-4c0f-9a23-bdc5c38b70ba", 00:16:54.457 "strip_size_kb": 64, 00:16:54.457 "state": "online", 00:16:54.457 "raid_level": "raid5f", 00:16:54.457 "superblock": true, 00:16:54.457 
"num_base_bdevs": 3, 00:16:54.457 "num_base_bdevs_discovered": 3, 00:16:54.457 "num_base_bdevs_operational": 3, 00:16:54.457 "base_bdevs_list": [ 00:16:54.457 { 00:16:54.457 "name": "NewBaseBdev", 00:16:54.457 "uuid": "b9838209-dff1-4476-8619-4d199c1bdab1", 00:16:54.457 "is_configured": true, 00:16:54.457 "data_offset": 2048, 00:16:54.457 "data_size": 63488 00:16:54.457 }, 00:16:54.457 { 00:16:54.457 "name": "BaseBdev2", 00:16:54.457 "uuid": "79561954-c13c-4bde-93a1-645cc1f7eea8", 00:16:54.457 "is_configured": true, 00:16:54.457 "data_offset": 2048, 00:16:54.457 "data_size": 63488 00:16:54.457 }, 00:16:54.457 { 00:16:54.457 "name": "BaseBdev3", 00:16:54.457 "uuid": "57128a1e-d924-4339-9f20-7aa152ad7484", 00:16:54.457 "is_configured": true, 00:16:54.457 "data_offset": 2048, 00:16:54.457 "data_size": 63488 00:16:54.457 } 00:16:54.457 ] 00:16:54.457 }' 00:16:54.457 04:07:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:54.457 04:07:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:54.717 04:07:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:16:54.717 04:07:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:16:54.717 04:07:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:16:54.717 04:07:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:16:54.717 04:07:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:16:54.717 04:07:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:16:54.717 04:07:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:16:54.717 04:07:48 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:16:54.717 04:07:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:54.717 04:07:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:16:54.717 [2024-12-06 04:07:48.050318] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:54.717 04:07:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:54.976 04:07:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:16:54.976 "name": "Existed_Raid", 00:16:54.976 "aliases": [ 00:16:54.976 "b2e2edf5-1136-4c0f-9a23-bdc5c38b70ba" 00:16:54.976 ], 00:16:54.976 "product_name": "Raid Volume", 00:16:54.976 "block_size": 512, 00:16:54.976 "num_blocks": 126976, 00:16:54.976 "uuid": "b2e2edf5-1136-4c0f-9a23-bdc5c38b70ba", 00:16:54.976 "assigned_rate_limits": { 00:16:54.976 "rw_ios_per_sec": 0, 00:16:54.976 "rw_mbytes_per_sec": 0, 00:16:54.976 "r_mbytes_per_sec": 0, 00:16:54.976 "w_mbytes_per_sec": 0 00:16:54.976 }, 00:16:54.976 "claimed": false, 00:16:54.976 "zoned": false, 00:16:54.976 "supported_io_types": { 00:16:54.976 "read": true, 00:16:54.976 "write": true, 00:16:54.976 "unmap": false, 00:16:54.976 "flush": false, 00:16:54.976 "reset": true, 00:16:54.976 "nvme_admin": false, 00:16:54.976 "nvme_io": false, 00:16:54.976 "nvme_io_md": false, 00:16:54.976 "write_zeroes": true, 00:16:54.976 "zcopy": false, 00:16:54.976 "get_zone_info": false, 00:16:54.976 "zone_management": false, 00:16:54.976 "zone_append": false, 00:16:54.976 "compare": false, 00:16:54.976 "compare_and_write": false, 00:16:54.976 "abort": false, 00:16:54.976 "seek_hole": false, 00:16:54.976 "seek_data": false, 00:16:54.976 "copy": false, 00:16:54.976 "nvme_iov_md": false 00:16:54.976 }, 00:16:54.976 "driver_specific": { 00:16:54.976 "raid": { 00:16:54.976 "uuid": "b2e2edf5-1136-4c0f-9a23-bdc5c38b70ba", 00:16:54.976 
"strip_size_kb": 64, 00:16:54.976 "state": "online", 00:16:54.976 "raid_level": "raid5f", 00:16:54.976 "superblock": true, 00:16:54.976 "num_base_bdevs": 3, 00:16:54.976 "num_base_bdevs_discovered": 3, 00:16:54.976 "num_base_bdevs_operational": 3, 00:16:54.976 "base_bdevs_list": [ 00:16:54.976 { 00:16:54.976 "name": "NewBaseBdev", 00:16:54.976 "uuid": "b9838209-dff1-4476-8619-4d199c1bdab1", 00:16:54.976 "is_configured": true, 00:16:54.976 "data_offset": 2048, 00:16:54.976 "data_size": 63488 00:16:54.976 }, 00:16:54.976 { 00:16:54.976 "name": "BaseBdev2", 00:16:54.976 "uuid": "79561954-c13c-4bde-93a1-645cc1f7eea8", 00:16:54.977 "is_configured": true, 00:16:54.977 "data_offset": 2048, 00:16:54.977 "data_size": 63488 00:16:54.977 }, 00:16:54.977 { 00:16:54.977 "name": "BaseBdev3", 00:16:54.977 "uuid": "57128a1e-d924-4339-9f20-7aa152ad7484", 00:16:54.977 "is_configured": true, 00:16:54.977 "data_offset": 2048, 00:16:54.977 "data_size": 63488 00:16:54.977 } 00:16:54.977 ] 00:16:54.977 } 00:16:54.977 } 00:16:54.977 }' 00:16:54.977 04:07:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:16:54.977 04:07:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:16:54.977 BaseBdev2 00:16:54.977 BaseBdev3' 00:16:54.977 04:07:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:54.977 04:07:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:16:54.977 04:07:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:54.977 04:07:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:16:54.977 04:07:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r 
'.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:54.977 04:07:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:54.977 04:07:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:54.977 04:07:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:54.977 04:07:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:54.977 04:07:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:54.977 04:07:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:54.977 04:07:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:54.977 04:07:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:16:54.977 04:07:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:54.977 04:07:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:54.977 04:07:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:54.977 04:07:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:54.977 04:07:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:54.977 04:07:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:54.977 04:07:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:54.977 04:07:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # 
rpc_cmd bdev_get_bdevs -b BaseBdev3 00:16:54.977 04:07:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:54.977 04:07:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:54.977 04:07:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:54.977 04:07:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:54.977 04:07:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:54.977 04:07:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:16:54.977 04:07:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:54.977 04:07:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:54.977 [2024-12-06 04:07:48.325624] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:16:54.977 [2024-12-06 04:07:48.325656] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:54.977 [2024-12-06 04:07:48.325742] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:54.977 [2024-12-06 04:07:48.326028] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:54.977 [2024-12-06 04:07:48.326061] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:16:55.236 04:07:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:55.237 04:07:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 80781 00:16:55.237 04:07:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 80781 ']' 00:16:55.237 04:07:48 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 80781 00:16:55.237 04:07:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:16:55.237 04:07:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:55.237 04:07:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 80781 00:16:55.237 killing process with pid 80781 00:16:55.237 04:07:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:55.237 04:07:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:55.237 04:07:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 80781' 00:16:55.237 04:07:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 80781 00:16:55.237 [2024-12-06 04:07:48.367783] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:16:55.237 04:07:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 80781 00:16:55.496 [2024-12-06 04:07:48.670169] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:16:56.874 04:07:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:16:56.874 00:16:56.874 real 0m10.846s 00:16:56.874 user 0m17.152s 00:16:56.874 sys 0m1.918s 00:16:56.874 ************************************ 00:16:56.874 END TEST raid5f_state_function_test_sb 00:16:56.874 ************************************ 00:16:56.874 04:07:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:56.874 04:07:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:56.874 04:07:49 bdev_raid -- bdev/bdev_raid.sh@988 -- # run_test raid5f_superblock_test raid_superblock_test 
raid5f 3 00:16:56.874 04:07:49 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:16:56.874 04:07:49 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:56.874 04:07:49 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:16:56.874 ************************************ 00:16:56.874 START TEST raid5f_superblock_test 00:16:56.874 ************************************ 00:16:56.874 04:07:49 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test raid5f 3 00:16:56.874 04:07:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid5f 00:16:56.874 04:07:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=3 00:16:56.874 04:07:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:16:56.874 04:07:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:16:56.874 04:07:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:16:56.874 04:07:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:16:56.874 04:07:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:16:56.874 04:07:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:16:56.874 04:07:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:16:56.874 04:07:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:16:56.874 04:07:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:16:56.874 04:07:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:16:56.874 04:07:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:16:56.874 04:07:49 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@404 -- # '[' raid5f '!=' raid1 ']' 00:16:56.874 04:07:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:16:56.874 04:07:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:16:56.874 04:07:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=81402 00:16:56.874 04:07:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:16:56.874 04:07:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 81402 00:16:56.874 04:07:49 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 81402 ']' 00:16:56.874 04:07:49 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:56.874 04:07:49 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:56.874 04:07:49 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:56.874 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:56.874 04:07:49 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:56.874 04:07:49 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:56.874 [2024-12-06 04:07:49.981817] Starting SPDK v25.01-pre git sha1 a4d2a837b / DPDK 24.03.0 initialization... 
00:16:56.874 [2024-12-06 04:07:49.981991] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81402 ] 00:16:56.874 [2024-12-06 04:07:50.152740] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:57.133 [2024-12-06 04:07:50.267771] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:57.133 [2024-12-06 04:07:50.464638] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:57.133 [2024-12-06 04:07:50.464807] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:57.702 04:07:50 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:57.702 04:07:50 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:16:57.702 04:07:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:16:57.702 04:07:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:16:57.702 04:07:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:16:57.702 04:07:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:16:57.702 04:07:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:16:57.702 04:07:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:16:57.702 04:07:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:16:57.702 04:07:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:16:57.702 04:07:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b 
malloc1 00:16:57.702 04:07:50 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:57.702 04:07:50 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:57.702 malloc1 00:16:57.702 04:07:50 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:57.702 04:07:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:16:57.702 04:07:50 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:57.702 04:07:50 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:57.702 [2024-12-06 04:07:50.927735] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:16:57.702 [2024-12-06 04:07:50.927882] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:57.702 [2024-12-06 04:07:50.927943] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:16:57.702 [2024-12-06 04:07:50.927982] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:57.702 [2024-12-06 04:07:50.930470] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:57.702 [2024-12-06 04:07:50.930581] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:16:57.702 pt1 00:16:57.702 04:07:50 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:57.702 04:07:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:16:57.702 04:07:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:16:57.702 04:07:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:16:57.702 04:07:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 
00:16:57.702 04:07:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:16:57.702 04:07:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:16:57.702 04:07:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:16:57.702 04:07:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:16:57.702 04:07:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:16:57.702 04:07:50 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:57.702 04:07:50 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:57.702 malloc2 00:16:57.702 04:07:50 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:57.702 04:07:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:16:57.702 04:07:50 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:57.702 04:07:50 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:57.702 [2024-12-06 04:07:50.988298] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:16:57.702 [2024-12-06 04:07:50.988383] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:57.702 [2024-12-06 04:07:50.988416] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:16:57.702 [2024-12-06 04:07:50.988426] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:57.702 [2024-12-06 04:07:50.990920] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:57.702 [2024-12-06 04:07:50.990978] vbdev_passthru.c: 
711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:16:57.702 pt2 00:16:57.702 04:07:50 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:57.702 04:07:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:16:57.702 04:07:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:16:57.702 04:07:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:16:57.702 04:07:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:16:57.702 04:07:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:16:57.702 04:07:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:16:57.702 04:07:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:16:57.702 04:07:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:16:57.702 04:07:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:16:57.702 04:07:50 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:57.702 04:07:50 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:57.961 malloc3 00:16:57.961 04:07:51 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:57.961 04:07:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:16:57.961 04:07:51 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:57.961 04:07:51 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:57.961 [2024-12-06 04:07:51.063034] 
vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:16:57.961 [2024-12-06 04:07:51.063191] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:57.961 [2024-12-06 04:07:51.063242] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:16:57.961 [2024-12-06 04:07:51.063283] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:57.961 [2024-12-06 04:07:51.065910] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:57.961 [2024-12-06 04:07:51.066011] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:16:57.961 pt3 00:16:57.961 04:07:51 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:57.961 04:07:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:16:57.961 04:07:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:16:57.961 04:07:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''pt1 pt2 pt3'\''' -n raid_bdev1 -s 00:16:57.961 04:07:51 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:57.961 04:07:51 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:57.961 [2024-12-06 04:07:51.075134] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:16:57.962 [2024-12-06 04:07:51.077524] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:16:57.962 [2024-12-06 04:07:51.077700] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:16:57.962 [2024-12-06 04:07:51.077985] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:16:57.962 [2024-12-06 04:07:51.078075] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 
00:16:57.962 [2024-12-06 04:07:51.078445] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:16:57.962 [2024-12-06 04:07:51.085056] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:16:57.962 [2024-12-06 04:07:51.085157] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:16:57.962 [2024-12-06 04:07:51.085513] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:57.962 04:07:51 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:57.962 04:07:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:16:57.962 04:07:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:57.962 04:07:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:57.962 04:07:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:57.962 04:07:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:57.962 04:07:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:57.962 04:07:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:57.962 04:07:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:57.962 04:07:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:57.962 04:07:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:57.962 04:07:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:57.962 04:07:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == 
"raid_bdev1")' 00:16:57.962 04:07:51 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:57.962 04:07:51 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:57.962 04:07:51 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:57.962 04:07:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:57.962 "name": "raid_bdev1", 00:16:57.962 "uuid": "f226fccd-cf6c-4f42-8484-b68074a34ce0", 00:16:57.962 "strip_size_kb": 64, 00:16:57.962 "state": "online", 00:16:57.962 "raid_level": "raid5f", 00:16:57.962 "superblock": true, 00:16:57.962 "num_base_bdevs": 3, 00:16:57.962 "num_base_bdevs_discovered": 3, 00:16:57.962 "num_base_bdevs_operational": 3, 00:16:57.962 "base_bdevs_list": [ 00:16:57.962 { 00:16:57.962 "name": "pt1", 00:16:57.962 "uuid": "00000000-0000-0000-0000-000000000001", 00:16:57.962 "is_configured": true, 00:16:57.962 "data_offset": 2048, 00:16:57.962 "data_size": 63488 00:16:57.962 }, 00:16:57.962 { 00:16:57.962 "name": "pt2", 00:16:57.962 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:57.962 "is_configured": true, 00:16:57.962 "data_offset": 2048, 00:16:57.962 "data_size": 63488 00:16:57.962 }, 00:16:57.962 { 00:16:57.962 "name": "pt3", 00:16:57.962 "uuid": "00000000-0000-0000-0000-000000000003", 00:16:57.962 "is_configured": true, 00:16:57.962 "data_offset": 2048, 00:16:57.962 "data_size": 63488 00:16:57.962 } 00:16:57.962 ] 00:16:57.962 }' 00:16:57.962 04:07:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:57.962 04:07:51 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:58.221 04:07:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:16:58.221 04:07:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:16:58.221 04:07:51 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:16:58.221 04:07:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:16:58.221 04:07:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:16:58.221 04:07:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:16:58.221 04:07:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:16:58.221 04:07:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:16:58.221 04:07:51 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:58.221 04:07:51 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:58.221 [2024-12-06 04:07:51.568467] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:58.480 04:07:51 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:58.480 04:07:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:16:58.480 "name": "raid_bdev1", 00:16:58.480 "aliases": [ 00:16:58.480 "f226fccd-cf6c-4f42-8484-b68074a34ce0" 00:16:58.480 ], 00:16:58.480 "product_name": "Raid Volume", 00:16:58.480 "block_size": 512, 00:16:58.480 "num_blocks": 126976, 00:16:58.480 "uuid": "f226fccd-cf6c-4f42-8484-b68074a34ce0", 00:16:58.480 "assigned_rate_limits": { 00:16:58.480 "rw_ios_per_sec": 0, 00:16:58.480 "rw_mbytes_per_sec": 0, 00:16:58.480 "r_mbytes_per_sec": 0, 00:16:58.480 "w_mbytes_per_sec": 0 00:16:58.480 }, 00:16:58.480 "claimed": false, 00:16:58.480 "zoned": false, 00:16:58.480 "supported_io_types": { 00:16:58.480 "read": true, 00:16:58.480 "write": true, 00:16:58.480 "unmap": false, 00:16:58.480 "flush": false, 00:16:58.480 "reset": true, 00:16:58.480 "nvme_admin": false, 00:16:58.480 "nvme_io": false, 00:16:58.480 "nvme_io_md": false, 
00:16:58.480 "write_zeroes": true, 00:16:58.480 "zcopy": false, 00:16:58.480 "get_zone_info": false, 00:16:58.480 "zone_management": false, 00:16:58.480 "zone_append": false, 00:16:58.480 "compare": false, 00:16:58.480 "compare_and_write": false, 00:16:58.480 "abort": false, 00:16:58.480 "seek_hole": false, 00:16:58.480 "seek_data": false, 00:16:58.480 "copy": false, 00:16:58.480 "nvme_iov_md": false 00:16:58.480 }, 00:16:58.480 "driver_specific": { 00:16:58.480 "raid": { 00:16:58.480 "uuid": "f226fccd-cf6c-4f42-8484-b68074a34ce0", 00:16:58.480 "strip_size_kb": 64, 00:16:58.480 "state": "online", 00:16:58.480 "raid_level": "raid5f", 00:16:58.480 "superblock": true, 00:16:58.480 "num_base_bdevs": 3, 00:16:58.480 "num_base_bdevs_discovered": 3, 00:16:58.480 "num_base_bdevs_operational": 3, 00:16:58.480 "base_bdevs_list": [ 00:16:58.480 { 00:16:58.480 "name": "pt1", 00:16:58.480 "uuid": "00000000-0000-0000-0000-000000000001", 00:16:58.480 "is_configured": true, 00:16:58.480 "data_offset": 2048, 00:16:58.480 "data_size": 63488 00:16:58.480 }, 00:16:58.480 { 00:16:58.480 "name": "pt2", 00:16:58.480 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:58.480 "is_configured": true, 00:16:58.480 "data_offset": 2048, 00:16:58.480 "data_size": 63488 00:16:58.480 }, 00:16:58.480 { 00:16:58.480 "name": "pt3", 00:16:58.480 "uuid": "00000000-0000-0000-0000-000000000003", 00:16:58.480 "is_configured": true, 00:16:58.480 "data_offset": 2048, 00:16:58.480 "data_size": 63488 00:16:58.480 } 00:16:58.480 ] 00:16:58.480 } 00:16:58.480 } 00:16:58.480 }' 00:16:58.480 04:07:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:16:58.480 04:07:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:16:58.480 pt2 00:16:58.480 pt3' 00:16:58.480 04:07:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, 
.md_interleave, .dif_type] | join(" ")' 00:16:58.480 04:07:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:16:58.480 04:07:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:58.481 04:07:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:16:58.481 04:07:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:58.481 04:07:51 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:58.481 04:07:51 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:58.481 04:07:51 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:58.481 04:07:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:58.481 04:07:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:58.481 04:07:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:58.481 04:07:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:16:58.481 04:07:51 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:58.481 04:07:51 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:58.481 04:07:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:58.481 04:07:51 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:58.481 04:07:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:58.481 04:07:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:58.481 
04:07:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:58.481 04:07:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:16:58.481 04:07:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:58.481 04:07:51 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:58.481 04:07:51 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:58.742 04:07:51 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:58.742 04:07:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:58.742 04:07:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:58.742 04:07:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:16:58.742 04:07:51 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:58.742 04:07:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:16:58.742 04:07:51 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:58.742 [2024-12-06 04:07:51.871892] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:58.742 04:07:51 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:58.742 04:07:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=f226fccd-cf6c-4f42-8484-b68074a34ce0 00:16:58.742 04:07:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z f226fccd-cf6c-4f42-8484-b68074a34ce0 ']' 00:16:58.742 04:07:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:16:58.742 04:07:51 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:58.742 04:07:51 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:58.742 [2024-12-06 04:07:51.915628] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:58.742 [2024-12-06 04:07:51.915761] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:58.742 [2024-12-06 04:07:51.915894] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:58.742 [2024-12-06 04:07:51.916019] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:58.742 [2024-12-06 04:07:51.916091] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:16:58.742 04:07:51 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:58.742 04:07:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:58.742 04:07:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:16:58.742 04:07:51 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:58.742 04:07:51 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:58.742 04:07:51 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:58.742 04:07:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:16:58.742 04:07:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:16:58.742 04:07:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:16:58.742 04:07:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:16:58.742 04:07:51 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:16:58.742 04:07:51 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:58.742 04:07:51 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:58.742 04:07:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:16:58.742 04:07:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:16:58.742 04:07:51 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:58.742 04:07:51 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:58.742 04:07:51 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:58.742 04:07:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:16:58.742 04:07:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:16:58.742 04:07:51 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:58.742 04:07:51 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:58.742 04:07:52 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:58.742 04:07:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:16:58.742 04:07:52 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:58.742 04:07:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:16:58.742 04:07:52 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:58.742 04:07:52 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:58.742 04:07:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # 
'[' false == true ']' 00:16:58.742 04:07:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:16:58.742 04:07:52 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:16:58.742 04:07:52 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:16:58.742 04:07:52 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:16:58.742 04:07:52 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:58.742 04:07:52 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:16:58.742 04:07:52 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:58.742 04:07:52 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:16:58.742 04:07:52 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:58.742 04:07:52 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:58.742 [2024-12-06 04:07:52.075413] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:16:58.742 [2024-12-06 04:07:52.077643] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:16:58.742 [2024-12-06 04:07:52.077713] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:16:58.742 [2024-12-06 04:07:52.077774] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:16:58.742 [2024-12-06 04:07:52.077849] 
bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:16:58.742 [2024-12-06 04:07:52.077871] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:16:58.742 [2024-12-06 04:07:52.077890] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:58.743 [2024-12-06 04:07:52.077900] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:16:58.743 request: 00:16:58.743 { 00:16:58.743 "name": "raid_bdev1", 00:16:58.743 "raid_level": "raid5f", 00:16:58.743 "base_bdevs": [ 00:16:58.743 "malloc1", 00:16:58.743 "malloc2", 00:16:58.743 "malloc3" 00:16:58.743 ], 00:16:58.743 "strip_size_kb": 64, 00:16:58.743 "superblock": false, 00:16:58.743 "method": "bdev_raid_create", 00:16:58.743 "req_id": 1 00:16:58.743 } 00:16:58.743 Got JSON-RPC error response 00:16:58.743 response: 00:16:58.743 { 00:16:58.743 "code": -17, 00:16:58.743 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:16:58.743 } 00:16:58.743 04:07:52 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:16:58.743 04:07:52 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:16:58.743 04:07:52 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:16:58.743 04:07:52 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:16:58.743 04:07:52 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:16:58.743 04:07:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:58.743 04:07:52 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:58.743 04:07:52 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 
00:16:58.743 04:07:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:16:59.033 04:07:52 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:59.033 04:07:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:16:59.033 04:07:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:16:59.033 04:07:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:16:59.033 04:07:52 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:59.033 04:07:52 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:59.033 [2024-12-06 04:07:52.143236] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:16:59.033 [2024-12-06 04:07:52.143409] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:59.033 [2024-12-06 04:07:52.143454] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:16:59.033 [2024-12-06 04:07:52.143491] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:59.033 [2024-12-06 04:07:52.146127] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:59.033 [2024-12-06 04:07:52.146239] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:16:59.033 [2024-12-06 04:07:52.146397] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:16:59.033 [2024-12-06 04:07:52.146507] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:16:59.033 pt1 00:16:59.033 04:07:52 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:59.033 04:07:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring 
raid5f 64 3 00:16:59.033 04:07:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:59.033 04:07:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:59.033 04:07:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:59.033 04:07:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:59.033 04:07:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:59.033 04:07:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:59.033 04:07:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:59.033 04:07:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:59.033 04:07:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:59.033 04:07:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:59.033 04:07:52 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:59.033 04:07:52 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:59.033 04:07:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:59.033 04:07:52 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:59.033 04:07:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:59.033 "name": "raid_bdev1", 00:16:59.033 "uuid": "f226fccd-cf6c-4f42-8484-b68074a34ce0", 00:16:59.033 "strip_size_kb": 64, 00:16:59.033 "state": "configuring", 00:16:59.033 "raid_level": "raid5f", 00:16:59.033 "superblock": true, 00:16:59.033 "num_base_bdevs": 3, 00:16:59.033 "num_base_bdevs_discovered": 1, 00:16:59.033 
"num_base_bdevs_operational": 3, 00:16:59.033 "base_bdevs_list": [ 00:16:59.033 { 00:16:59.033 "name": "pt1", 00:16:59.033 "uuid": "00000000-0000-0000-0000-000000000001", 00:16:59.033 "is_configured": true, 00:16:59.033 "data_offset": 2048, 00:16:59.033 "data_size": 63488 00:16:59.033 }, 00:16:59.033 { 00:16:59.033 "name": null, 00:16:59.033 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:59.033 "is_configured": false, 00:16:59.033 "data_offset": 2048, 00:16:59.033 "data_size": 63488 00:16:59.033 }, 00:16:59.033 { 00:16:59.033 "name": null, 00:16:59.033 "uuid": "00000000-0000-0000-0000-000000000003", 00:16:59.033 "is_configured": false, 00:16:59.033 "data_offset": 2048, 00:16:59.033 "data_size": 63488 00:16:59.033 } 00:16:59.033 ] 00:16:59.033 }' 00:16:59.033 04:07:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:59.033 04:07:52 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:59.292 04:07:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 3 -gt 2 ']' 00:16:59.292 04:07:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:16:59.292 04:07:52 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:59.292 04:07:52 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:59.292 [2024-12-06 04:07:52.634391] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:16:59.292 [2024-12-06 04:07:52.634472] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:59.292 [2024-12-06 04:07:52.634497] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:16:59.292 [2024-12-06 04:07:52.634507] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:59.292 [2024-12-06 04:07:52.635037] vbdev_passthru.c: 
710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:59.292 [2024-12-06 04:07:52.635079] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:16:59.292 [2024-12-06 04:07:52.635184] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:16:59.292 [2024-12-06 04:07:52.635224] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:16:59.292 pt2 00:16:59.292 04:07:52 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:59.292 04:07:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:16:59.292 04:07:52 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:59.292 04:07:52 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:59.559 [2024-12-06 04:07:52.646417] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:16:59.559 04:07:52 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:59.559 04:07:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 3 00:16:59.559 04:07:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:59.559 04:07:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:59.559 04:07:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:59.559 04:07:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:59.559 04:07:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:59.559 04:07:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:59.559 04:07:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local 
num_base_bdevs 00:16:59.559 04:07:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:59.559 04:07:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:59.559 04:07:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:59.559 04:07:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:59.559 04:07:52 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:59.559 04:07:52 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:59.559 04:07:52 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:59.559 04:07:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:59.559 "name": "raid_bdev1", 00:16:59.559 "uuid": "f226fccd-cf6c-4f42-8484-b68074a34ce0", 00:16:59.559 "strip_size_kb": 64, 00:16:59.559 "state": "configuring", 00:16:59.559 "raid_level": "raid5f", 00:16:59.559 "superblock": true, 00:16:59.559 "num_base_bdevs": 3, 00:16:59.559 "num_base_bdevs_discovered": 1, 00:16:59.559 "num_base_bdevs_operational": 3, 00:16:59.559 "base_bdevs_list": [ 00:16:59.559 { 00:16:59.559 "name": "pt1", 00:16:59.559 "uuid": "00000000-0000-0000-0000-000000000001", 00:16:59.559 "is_configured": true, 00:16:59.559 "data_offset": 2048, 00:16:59.559 "data_size": 63488 00:16:59.559 }, 00:16:59.559 { 00:16:59.559 "name": null, 00:16:59.559 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:59.559 "is_configured": false, 00:16:59.559 "data_offset": 0, 00:16:59.559 "data_size": 63488 00:16:59.559 }, 00:16:59.559 { 00:16:59.559 "name": null, 00:16:59.559 "uuid": "00000000-0000-0000-0000-000000000003", 00:16:59.559 "is_configured": false, 00:16:59.559 "data_offset": 2048, 00:16:59.559 "data_size": 63488 00:16:59.559 } 00:16:59.559 ] 00:16:59.559 }' 00:16:59.559 04:07:52 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:59.559 04:07:52 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:59.817 04:07:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:16:59.817 04:07:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:16:59.817 04:07:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:16:59.817 04:07:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:59.817 04:07:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:59.817 [2024-12-06 04:07:53.089635] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:16:59.817 [2024-12-06 04:07:53.089813] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:59.817 [2024-12-06 04:07:53.089868] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:16:59.817 [2024-12-06 04:07:53.089908] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:59.817 [2024-12-06 04:07:53.090526] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:59.817 [2024-12-06 04:07:53.090611] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:16:59.817 [2024-12-06 04:07:53.090720] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:16:59.817 [2024-12-06 04:07:53.090752] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:16:59.817 pt2 00:16:59.817 04:07:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:59.817 04:07:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:16:59.817 04:07:53 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:16:59.817 04:07:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:16:59.817 04:07:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:59.817 04:07:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:59.817 [2024-12-06 04:07:53.101641] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:16:59.817 [2024-12-06 04:07:53.101813] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:59.817 [2024-12-06 04:07:53.101839] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:16:59.818 [2024-12-06 04:07:53.101853] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:59.818 [2024-12-06 04:07:53.102416] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:59.818 [2024-12-06 04:07:53.102448] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:16:59.818 [2024-12-06 04:07:53.102550] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:16:59.818 [2024-12-06 04:07:53.102581] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:16:59.818 [2024-12-06 04:07:53.102757] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:16:59.818 [2024-12-06 04:07:53.102774] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:16:59.818 [2024-12-06 04:07:53.103075] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:16:59.818 [2024-12-06 04:07:53.109599] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:16:59.818 [2024-12-06 04:07:53.109633] 
bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:16:59.818 [2024-12-06 04:07:53.109923] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:59.818 pt3 00:16:59.818 04:07:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:59.818 04:07:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:16:59.818 04:07:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:16:59.818 04:07:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:16:59.818 04:07:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:59.818 04:07:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:59.818 04:07:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:59.818 04:07:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:59.818 04:07:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:59.818 04:07:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:59.818 04:07:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:59.818 04:07:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:59.818 04:07:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:59.818 04:07:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:59.818 04:07:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:59.818 04:07:53 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:16:59.818 04:07:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:59.818 04:07:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:59.818 04:07:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:59.818 "name": "raid_bdev1", 00:16:59.818 "uuid": "f226fccd-cf6c-4f42-8484-b68074a34ce0", 00:16:59.818 "strip_size_kb": 64, 00:16:59.818 "state": "online", 00:16:59.818 "raid_level": "raid5f", 00:16:59.818 "superblock": true, 00:16:59.818 "num_base_bdevs": 3, 00:16:59.818 "num_base_bdevs_discovered": 3, 00:16:59.818 "num_base_bdevs_operational": 3, 00:16:59.818 "base_bdevs_list": [ 00:16:59.818 { 00:16:59.818 "name": "pt1", 00:16:59.818 "uuid": "00000000-0000-0000-0000-000000000001", 00:16:59.818 "is_configured": true, 00:16:59.818 "data_offset": 2048, 00:16:59.818 "data_size": 63488 00:16:59.818 }, 00:16:59.818 { 00:16:59.818 "name": "pt2", 00:16:59.818 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:59.818 "is_configured": true, 00:16:59.818 "data_offset": 2048, 00:16:59.818 "data_size": 63488 00:16:59.818 }, 00:16:59.818 { 00:16:59.818 "name": "pt3", 00:16:59.818 "uuid": "00000000-0000-0000-0000-000000000003", 00:16:59.818 "is_configured": true, 00:16:59.818 "data_offset": 2048, 00:16:59.818 "data_size": 63488 00:16:59.818 } 00:16:59.818 ] 00:16:59.818 }' 00:16:59.818 04:07:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:59.818 04:07:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:00.383 04:07:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:17:00.383 04:07:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:17:00.383 04:07:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 
00:17:00.383 04:07:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:17:00.383 04:07:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:17:00.383 04:07:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:17:00.383 04:07:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:17:00.383 04:07:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:17:00.383 04:07:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:00.383 04:07:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:00.383 [2024-12-06 04:07:53.585089] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:00.383 04:07:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:00.383 04:07:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:17:00.383 "name": "raid_bdev1", 00:17:00.383 "aliases": [ 00:17:00.383 "f226fccd-cf6c-4f42-8484-b68074a34ce0" 00:17:00.383 ], 00:17:00.383 "product_name": "Raid Volume", 00:17:00.383 "block_size": 512, 00:17:00.383 "num_blocks": 126976, 00:17:00.383 "uuid": "f226fccd-cf6c-4f42-8484-b68074a34ce0", 00:17:00.383 "assigned_rate_limits": { 00:17:00.383 "rw_ios_per_sec": 0, 00:17:00.383 "rw_mbytes_per_sec": 0, 00:17:00.383 "r_mbytes_per_sec": 0, 00:17:00.383 "w_mbytes_per_sec": 0 00:17:00.383 }, 00:17:00.383 "claimed": false, 00:17:00.383 "zoned": false, 00:17:00.383 "supported_io_types": { 00:17:00.383 "read": true, 00:17:00.383 "write": true, 00:17:00.383 "unmap": false, 00:17:00.383 "flush": false, 00:17:00.383 "reset": true, 00:17:00.383 "nvme_admin": false, 00:17:00.383 "nvme_io": false, 00:17:00.383 "nvme_io_md": false, 00:17:00.383 "write_zeroes": true, 00:17:00.383 "zcopy": false, 00:17:00.383 
"get_zone_info": false, 00:17:00.383 "zone_management": false, 00:17:00.383 "zone_append": false, 00:17:00.384 "compare": false, 00:17:00.384 "compare_and_write": false, 00:17:00.384 "abort": false, 00:17:00.384 "seek_hole": false, 00:17:00.384 "seek_data": false, 00:17:00.384 "copy": false, 00:17:00.384 "nvme_iov_md": false 00:17:00.384 }, 00:17:00.384 "driver_specific": { 00:17:00.384 "raid": { 00:17:00.384 "uuid": "f226fccd-cf6c-4f42-8484-b68074a34ce0", 00:17:00.384 "strip_size_kb": 64, 00:17:00.384 "state": "online", 00:17:00.384 "raid_level": "raid5f", 00:17:00.384 "superblock": true, 00:17:00.384 "num_base_bdevs": 3, 00:17:00.384 "num_base_bdevs_discovered": 3, 00:17:00.384 "num_base_bdevs_operational": 3, 00:17:00.384 "base_bdevs_list": [ 00:17:00.384 { 00:17:00.384 "name": "pt1", 00:17:00.384 "uuid": "00000000-0000-0000-0000-000000000001", 00:17:00.384 "is_configured": true, 00:17:00.384 "data_offset": 2048, 00:17:00.384 "data_size": 63488 00:17:00.384 }, 00:17:00.384 { 00:17:00.384 "name": "pt2", 00:17:00.384 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:00.384 "is_configured": true, 00:17:00.384 "data_offset": 2048, 00:17:00.384 "data_size": 63488 00:17:00.384 }, 00:17:00.384 { 00:17:00.384 "name": "pt3", 00:17:00.384 "uuid": "00000000-0000-0000-0000-000000000003", 00:17:00.384 "is_configured": true, 00:17:00.384 "data_offset": 2048, 00:17:00.384 "data_size": 63488 00:17:00.384 } 00:17:00.384 ] 00:17:00.384 } 00:17:00.384 } 00:17:00.384 }' 00:17:00.384 04:07:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:17:00.384 04:07:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:17:00.384 pt2 00:17:00.384 pt3' 00:17:00.384 04:07:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:00.384 04:07:53 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:17:00.384 04:07:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:00.384 04:07:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:17:00.384 04:07:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:00.384 04:07:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:00.384 04:07:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:00.642 04:07:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:00.643 04:07:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:17:00.643 04:07:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:17:00.643 04:07:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:00.643 04:07:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:17:00.643 04:07:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:00.643 04:07:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:00.643 04:07:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:00.643 04:07:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:00.643 04:07:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:17:00.643 04:07:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:17:00.643 04:07:53 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:00.643 04:07:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:17:00.643 04:07:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:00.643 04:07:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:00.643 04:07:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:00.643 04:07:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:00.643 04:07:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:17:00.643 04:07:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:17:00.643 04:07:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:17:00.643 04:07:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:17:00.643 04:07:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:00.643 04:07:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:00.643 [2024-12-06 04:07:53.880611] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:00.643 04:07:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:00.643 04:07:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' f226fccd-cf6c-4f42-8484-b68074a34ce0 '!=' f226fccd-cf6c-4f42-8484-b68074a34ce0 ']' 00:17:00.643 04:07:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid5f 00:17:00.643 04:07:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:17:00.643 04:07:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@199 -- # return 0 
00:17:00.643 04:07:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:17:00.643 04:07:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:00.643 04:07:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:00.643 [2024-12-06 04:07:53.924380] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:17:00.643 04:07:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:00.643 04:07:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:17:00.643 04:07:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:00.643 04:07:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:00.643 04:07:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:00.643 04:07:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:00.643 04:07:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:00.643 04:07:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:00.643 04:07:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:00.643 04:07:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:00.643 04:07:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:00.643 04:07:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:00.643 04:07:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:00.643 04:07:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:00.643 
04:07:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:00.643 04:07:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:00.643 04:07:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:00.643 "name": "raid_bdev1", 00:17:00.643 "uuid": "f226fccd-cf6c-4f42-8484-b68074a34ce0", 00:17:00.643 "strip_size_kb": 64, 00:17:00.643 "state": "online", 00:17:00.643 "raid_level": "raid5f", 00:17:00.643 "superblock": true, 00:17:00.643 "num_base_bdevs": 3, 00:17:00.643 "num_base_bdevs_discovered": 2, 00:17:00.643 "num_base_bdevs_operational": 2, 00:17:00.643 "base_bdevs_list": [ 00:17:00.643 { 00:17:00.643 "name": null, 00:17:00.643 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:00.643 "is_configured": false, 00:17:00.643 "data_offset": 0, 00:17:00.643 "data_size": 63488 00:17:00.643 }, 00:17:00.643 { 00:17:00.643 "name": "pt2", 00:17:00.643 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:00.643 "is_configured": true, 00:17:00.643 "data_offset": 2048, 00:17:00.643 "data_size": 63488 00:17:00.643 }, 00:17:00.643 { 00:17:00.643 "name": "pt3", 00:17:00.643 "uuid": "00000000-0000-0000-0000-000000000003", 00:17:00.643 "is_configured": true, 00:17:00.643 "data_offset": 2048, 00:17:00.643 "data_size": 63488 00:17:00.643 } 00:17:00.643 ] 00:17:00.643 }' 00:17:00.643 04:07:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:00.643 04:07:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:01.208 04:07:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:17:01.208 04:07:54 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:01.208 04:07:54 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:01.208 [2024-12-06 04:07:54.387536] 
bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:01.208 [2024-12-06 04:07:54.387650] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:01.208 [2024-12-06 04:07:54.387739] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:01.208 [2024-12-06 04:07:54.387802] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:01.208 [2024-12-06 04:07:54.387817] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:17:01.208 04:07:54 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:01.208 04:07:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:17:01.208 04:07:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:01.208 04:07:54 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:01.208 04:07:54 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:01.208 04:07:54 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:01.208 04:07:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:17:01.208 04:07:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:17:01.208 04:07:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:17:01.208 04:07:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:17:01.208 04:07:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:17:01.208 04:07:54 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:01.208 04:07:54 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 
00:17:01.208 04:07:54 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:01.208 04:07:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:17:01.208 04:07:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:17:01.208 04:07:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt3 00:17:01.208 04:07:54 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:01.208 04:07:54 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:01.208 04:07:54 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:01.208 04:07:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:17:01.208 04:07:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:17:01.208 04:07:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:17:01.208 04:07:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:17:01.208 04:07:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:17:01.208 04:07:54 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:01.208 04:07:54 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:01.208 [2024-12-06 04:07:54.471339] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:17:01.208 [2024-12-06 04:07:54.471436] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:01.208 [2024-12-06 04:07:54.471470] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a580 00:17:01.208 [2024-12-06 04:07:54.471523] vbdev_passthru.c: 697:vbdev_passthru_register: 
*NOTICE*: bdev claimed 00:17:01.208 [2024-12-06 04:07:54.473819] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:01.208 [2024-12-06 04:07:54.473899] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:17:01.208 [2024-12-06 04:07:54.474005] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:17:01.208 [2024-12-06 04:07:54.474107] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:17:01.208 pt2 00:17:01.208 04:07:54 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:01.208 04:07:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 2 00:17:01.208 04:07:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:01.208 04:07:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:01.208 04:07:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:01.208 04:07:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:01.208 04:07:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:01.208 04:07:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:01.208 04:07:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:01.208 04:07:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:01.208 04:07:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:01.208 04:07:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:01.208 04:07:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == 
"raid_bdev1")' 00:17:01.208 04:07:54 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:01.208 04:07:54 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:01.208 04:07:54 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:01.208 04:07:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:01.208 "name": "raid_bdev1", 00:17:01.208 "uuid": "f226fccd-cf6c-4f42-8484-b68074a34ce0", 00:17:01.208 "strip_size_kb": 64, 00:17:01.208 "state": "configuring", 00:17:01.208 "raid_level": "raid5f", 00:17:01.208 "superblock": true, 00:17:01.208 "num_base_bdevs": 3, 00:17:01.208 "num_base_bdevs_discovered": 1, 00:17:01.208 "num_base_bdevs_operational": 2, 00:17:01.208 "base_bdevs_list": [ 00:17:01.208 { 00:17:01.208 "name": null, 00:17:01.208 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:01.208 "is_configured": false, 00:17:01.208 "data_offset": 2048, 00:17:01.208 "data_size": 63488 00:17:01.208 }, 00:17:01.208 { 00:17:01.208 "name": "pt2", 00:17:01.208 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:01.208 "is_configured": true, 00:17:01.208 "data_offset": 2048, 00:17:01.208 "data_size": 63488 00:17:01.208 }, 00:17:01.208 { 00:17:01.208 "name": null, 00:17:01.208 "uuid": "00000000-0000-0000-0000-000000000003", 00:17:01.208 "is_configured": false, 00:17:01.208 "data_offset": 2048, 00:17:01.208 "data_size": 63488 00:17:01.208 } 00:17:01.208 ] 00:17:01.208 }' 00:17:01.208 04:07:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:01.208 04:07:54 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:01.774 04:07:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ )) 00:17:01.774 04:07:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:17:01.774 04:07:54 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@519 -- # i=2 00:17:01.774 04:07:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:17:01.774 04:07:54 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:01.774 04:07:54 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:01.774 [2024-12-06 04:07:54.914623] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:17:01.774 [2024-12-06 04:07:54.914697] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:01.774 [2024-12-06 04:07:54.914720] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:17:01.774 [2024-12-06 04:07:54.914731] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:01.774 [2024-12-06 04:07:54.915281] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:01.774 [2024-12-06 04:07:54.915316] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:17:01.774 [2024-12-06 04:07:54.915406] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:17:01.774 [2024-12-06 04:07:54.915437] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:17:01.774 [2024-12-06 04:07:54.915582] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:17:01.774 [2024-12-06 04:07:54.915599] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:17:01.774 [2024-12-06 04:07:54.915874] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:17:01.774 [2024-12-06 04:07:54.921171] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:17:01.774 pt3 00:17:01.774 [2024-12-06 04:07:54.921236] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid 
bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:17:01.774 [2024-12-06 04:07:54.921577] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:01.774 04:07:54 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:01.774 04:07:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:17:01.774 04:07:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:01.774 04:07:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:01.774 04:07:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:01.774 04:07:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:01.774 04:07:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:01.774 04:07:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:01.774 04:07:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:01.774 04:07:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:01.774 04:07:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:01.774 04:07:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:01.774 04:07:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:01.774 04:07:54 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:01.774 04:07:54 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:01.774 04:07:54 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:01.774 04:07:54 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:01.774 "name": "raid_bdev1", 00:17:01.774 "uuid": "f226fccd-cf6c-4f42-8484-b68074a34ce0", 00:17:01.774 "strip_size_kb": 64, 00:17:01.774 "state": "online", 00:17:01.774 "raid_level": "raid5f", 00:17:01.774 "superblock": true, 00:17:01.774 "num_base_bdevs": 3, 00:17:01.774 "num_base_bdevs_discovered": 2, 00:17:01.774 "num_base_bdevs_operational": 2, 00:17:01.774 "base_bdevs_list": [ 00:17:01.774 { 00:17:01.774 "name": null, 00:17:01.774 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:01.774 "is_configured": false, 00:17:01.774 "data_offset": 2048, 00:17:01.774 "data_size": 63488 00:17:01.774 }, 00:17:01.774 { 00:17:01.774 "name": "pt2", 00:17:01.774 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:01.774 "is_configured": true, 00:17:01.774 "data_offset": 2048, 00:17:01.774 "data_size": 63488 00:17:01.774 }, 00:17:01.774 { 00:17:01.774 "name": "pt3", 00:17:01.774 "uuid": "00000000-0000-0000-0000-000000000003", 00:17:01.774 "is_configured": true, 00:17:01.774 "data_offset": 2048, 00:17:01.774 "data_size": 63488 00:17:01.774 } 00:17:01.774 ] 00:17:01.774 }' 00:17:01.774 04:07:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:01.774 04:07:54 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:02.031 04:07:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:17:02.031 04:07:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:02.031 04:07:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:02.031 [2024-12-06 04:07:55.368303] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:02.031 [2024-12-06 04:07:55.368348] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:02.031 [2024-12-06 04:07:55.368446] 
bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:02.031 [2024-12-06 04:07:55.368523] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:02.031 [2024-12-06 04:07:55.368552] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:17:02.031 04:07:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:02.031 04:07:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:02.031 04:07:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:02.031 04:07:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:17:02.031 04:07:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:02.289 04:07:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:02.289 04:07:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:17:02.289 04:07:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:17:02.289 04:07:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@532 -- # '[' 3 -gt 2 ']' 00:17:02.289 04:07:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@534 -- # i=2 00:17:02.289 04:07:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@535 -- # rpc_cmd bdev_passthru_delete pt3 00:17:02.289 04:07:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:02.289 04:07:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:02.289 04:07:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:02.289 04:07:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 
00:17:02.289 04:07:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:02.289 04:07:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:02.289 [2024-12-06 04:07:55.444257] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:17:02.289 [2024-12-06 04:07:55.444428] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:02.289 [2024-12-06 04:07:55.444481] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:17:02.289 [2024-12-06 04:07:55.444549] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:02.289 [2024-12-06 04:07:55.447383] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:02.289 [2024-12-06 04:07:55.447487] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:17:02.289 [2024-12-06 04:07:55.447647] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:17:02.289 [2024-12-06 04:07:55.447752] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:17:02.289 [2024-12-06 04:07:55.447949] bdev_raid.c:3685:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:17:02.289 [2024-12-06 04:07:55.447965] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:02.289 [2024-12-06 04:07:55.447987] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state configuring 00:17:02.289 [2024-12-06 04:07:55.448081] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:17:02.289 pt1 00:17:02.289 04:07:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:02.289 04:07:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@542 -- # '[' 3 -gt 2 ']' 00:17:02.289 04:07:55 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@545 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 2 00:17:02.289 04:07:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:02.289 04:07:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:02.289 04:07:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:02.289 04:07:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:02.289 04:07:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:02.289 04:07:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:02.289 04:07:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:02.289 04:07:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:02.289 04:07:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:02.289 04:07:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:02.289 04:07:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:02.289 04:07:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:02.289 04:07:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:02.289 04:07:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:02.289 04:07:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:02.289 "name": "raid_bdev1", 00:17:02.289 "uuid": "f226fccd-cf6c-4f42-8484-b68074a34ce0", 00:17:02.289 "strip_size_kb": 64, 00:17:02.289 "state": "configuring", 00:17:02.289 "raid_level": "raid5f", 00:17:02.289 
"superblock": true, 00:17:02.289 "num_base_bdevs": 3, 00:17:02.289 "num_base_bdevs_discovered": 1, 00:17:02.289 "num_base_bdevs_operational": 2, 00:17:02.289 "base_bdevs_list": [ 00:17:02.289 { 00:17:02.289 "name": null, 00:17:02.289 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:02.289 "is_configured": false, 00:17:02.289 "data_offset": 2048, 00:17:02.289 "data_size": 63488 00:17:02.289 }, 00:17:02.289 { 00:17:02.289 "name": "pt2", 00:17:02.289 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:02.289 "is_configured": true, 00:17:02.289 "data_offset": 2048, 00:17:02.289 "data_size": 63488 00:17:02.289 }, 00:17:02.289 { 00:17:02.289 "name": null, 00:17:02.289 "uuid": "00000000-0000-0000-0000-000000000003", 00:17:02.289 "is_configured": false, 00:17:02.289 "data_offset": 2048, 00:17:02.289 "data_size": 63488 00:17:02.289 } 00:17:02.289 ] 00:17:02.289 }' 00:17:02.289 04:07:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:02.289 04:07:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:02.547 04:07:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # rpc_cmd bdev_raid_get_bdevs configuring 00:17:02.547 04:07:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:02.547 04:07:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:02.547 04:07:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:17:02.547 04:07:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:02.804 04:07:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # [[ false == \f\a\l\s\e ]] 00:17:02.804 04:07:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@549 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:17:02.804 04:07:55 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:17:02.804 04:07:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:02.805 [2024-12-06 04:07:55.927528] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:17:02.805 [2024-12-06 04:07:55.927618] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:02.805 [2024-12-06 04:07:55.927643] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480 00:17:02.805 [2024-12-06 04:07:55.927655] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:02.805 [2024-12-06 04:07:55.928252] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:02.805 [2024-12-06 04:07:55.928283] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:17:02.805 [2024-12-06 04:07:55.928386] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:17:02.805 [2024-12-06 04:07:55.928414] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:17:02.805 [2024-12-06 04:07:55.928581] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008900 00:17:02.805 [2024-12-06 04:07:55.928593] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:17:02.805 [2024-12-06 04:07:55.928897] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:17:02.805 [2024-12-06 04:07:55.935895] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:17:02.805 [2024-12-06 04:07:55.935953] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:17:02.805 [2024-12-06 04:07:55.936296] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:02.805 pt3 00:17:02.805 04:07:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:17:02.805 04:07:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:17:02.805 04:07:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:02.805 04:07:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:02.805 04:07:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:02.805 04:07:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:02.805 04:07:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:02.805 04:07:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:02.805 04:07:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:02.805 04:07:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:02.805 04:07:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:02.805 04:07:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:02.805 04:07:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:02.805 04:07:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:02.805 04:07:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:02.805 04:07:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:02.805 04:07:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:02.805 "name": "raid_bdev1", 00:17:02.805 "uuid": "f226fccd-cf6c-4f42-8484-b68074a34ce0", 00:17:02.805 "strip_size_kb": 64, 00:17:02.805 "state": "online", 00:17:02.805 "raid_level": 
"raid5f", 00:17:02.805 "superblock": true, 00:17:02.805 "num_base_bdevs": 3, 00:17:02.805 "num_base_bdevs_discovered": 2, 00:17:02.805 "num_base_bdevs_operational": 2, 00:17:02.805 "base_bdevs_list": [ 00:17:02.805 { 00:17:02.805 "name": null, 00:17:02.805 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:02.805 "is_configured": false, 00:17:02.805 "data_offset": 2048, 00:17:02.805 "data_size": 63488 00:17:02.805 }, 00:17:02.805 { 00:17:02.805 "name": "pt2", 00:17:02.805 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:02.805 "is_configured": true, 00:17:02.805 "data_offset": 2048, 00:17:02.805 "data_size": 63488 00:17:02.805 }, 00:17:02.805 { 00:17:02.805 "name": "pt3", 00:17:02.805 "uuid": "00000000-0000-0000-0000-000000000003", 00:17:02.805 "is_configured": true, 00:17:02.805 "data_offset": 2048, 00:17:02.805 "data_size": 63488 00:17:02.805 } 00:17:02.805 ] 00:17:02.805 }' 00:17:02.805 04:07:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:02.805 04:07:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:03.062 04:07:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:17:03.062 04:07:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:17:03.062 04:07:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:03.062 04:07:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:03.325 04:07:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:03.325 04:07:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:17:03.325 04:07:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:17:03.325 04:07:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:17:03.325 04:07:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:03.325 04:07:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:17:03.325 [2024-12-06 04:07:56.455850] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:03.325 04:07:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:03.325 04:07:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # '[' f226fccd-cf6c-4f42-8484-b68074a34ce0 '!=' f226fccd-cf6c-4f42-8484-b68074a34ce0 ']' 00:17:03.325 04:07:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 81402 00:17:03.325 04:07:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 81402 ']' 00:17:03.325 04:07:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@958 -- # kill -0 81402 00:17:03.325 04:07:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@959 -- # uname 00:17:03.326 04:07:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:03.326 04:07:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 81402 00:17:03.326 04:07:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:03.326 04:07:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:03.326 04:07:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 81402' 00:17:03.326 killing process with pid 81402 00:17:03.326 04:07:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@973 -- # kill 81402 00:17:03.326 [2024-12-06 04:07:56.523902] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:17:03.326 04:07:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@978 -- # wait 
81402 00:17:03.326 [2024-12-06 04:07:56.524014] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:03.326 [2024-12-06 04:07:56.524116] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:03.326 [2024-12-06 04:07:56.524132] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 00:17:03.586 [2024-12-06 04:07:56.859602] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:17:04.962 04:07:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:17:04.962 00:17:04.962 real 0m8.154s 00:17:04.962 user 0m12.714s 00:17:04.962 sys 0m1.476s 00:17:04.962 ************************************ 00:17:04.962 END TEST raid5f_superblock_test 00:17:04.962 ************************************ 00:17:04.962 04:07:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:04.962 04:07:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:04.962 04:07:58 bdev_raid -- bdev/bdev_raid.sh@989 -- # '[' true = true ']' 00:17:04.962 04:07:58 bdev_raid -- bdev/bdev_raid.sh@990 -- # run_test raid5f_rebuild_test raid_rebuild_test raid5f 3 false false true 00:17:04.962 04:07:58 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:17:04.962 04:07:58 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:04.962 04:07:58 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:17:04.962 ************************************ 00:17:04.962 START TEST raid5f_rebuild_test 00:17:04.962 ************************************ 00:17:04.962 04:07:58 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid5f 3 false false true 00:17:04.962 04:07:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@569 -- # local raid_level=raid5f 00:17:04.962 04:07:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@570 -- # 
local num_base_bdevs=3 00:17:04.962 04:07:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@571 -- # local superblock=false 00:17:04.962 04:07:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:17:04.962 04:07:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@573 -- # local verify=true 00:17:04.962 04:07:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:17:04.962 04:07:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:17:04.962 04:07:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:17:04.962 04:07:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:17:04.962 04:07:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:17:04.962 04:07:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:17:04.962 04:07:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:17:04.962 04:07:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:17:04.962 04:07:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:17:04.962 04:07:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:17:04.962 04:07:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:17:04.962 04:07:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:17:04.962 04:07:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:17:04.962 04:07:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:17:04.962 04:07:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # local strip_size 00:17:04.962 04:07:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@577 -- # local create_arg 00:17:04.962 04:07:58 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:17:04.962 04:07:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@579 -- # local data_offset 00:17:04.962 04:07:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@581 -- # '[' raid5f '!=' raid1 ']' 00:17:04.962 04:07:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@582 -- # '[' false = true ']' 00:17:04.962 04:07:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@586 -- # strip_size=64 00:17:04.962 04:07:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@587 -- # create_arg+=' -z 64' 00:17:04.963 04:07:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@592 -- # '[' false = true ']' 00:17:04.963 04:07:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@597 -- # raid_pid=81851 00:17:04.963 04:07:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:17:04.963 04:07:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@598 -- # waitforlisten 81851 00:17:04.963 04:07:58 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@835 -- # '[' -z 81851 ']' 00:17:04.963 04:07:58 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:04.963 04:07:58 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:04.963 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:04.963 04:07:58 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:04.963 04:07:58 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:04.963 04:07:58 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:04.963 I/O size of 3145728 is greater than zero copy threshold (65536). 
00:17:04.963 Zero copy mechanism will not be used. 00:17:04.963 [2024-12-06 04:07:58.220877] Starting SPDK v25.01-pre git sha1 a4d2a837b / DPDK 24.03.0 initialization... 00:17:04.963 [2024-12-06 04:07:58.221000] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81851 ] 00:17:05.221 [2024-12-06 04:07:58.397374] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:05.221 [2024-12-06 04:07:58.520867] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:05.479 [2024-12-06 04:07:58.739985] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:05.479 [2024-12-06 04:07:58.740019] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:06.047 04:07:59 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:06.047 04:07:59 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@868 -- # return 0 00:17:06.047 04:07:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:17:06.047 04:07:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:17:06.047 04:07:59 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:06.047 04:07:59 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:06.047 BaseBdev1_malloc 00:17:06.047 04:07:59 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:06.047 04:07:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:17:06.047 04:07:59 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:06.047 04:07:59 
bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:06.047 [2024-12-06 04:07:59.150505] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:17:06.047 [2024-12-06 04:07:59.150625] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:06.047 [2024-12-06 04:07:59.150654] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:17:06.047 [2024-12-06 04:07:59.150668] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:06.047 [2024-12-06 04:07:59.153012] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:06.047 [2024-12-06 04:07:59.153064] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:17:06.047 BaseBdev1 00:17:06.047 04:07:59 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:06.047 04:07:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:17:06.047 04:07:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:17:06.047 04:07:59 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:06.047 04:07:59 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:06.047 BaseBdev2_malloc 00:17:06.047 04:07:59 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:06.047 04:07:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:17:06.047 04:07:59 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:06.047 04:07:59 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:06.047 [2024-12-06 04:07:59.206980] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 
00:17:06.047 [2024-12-06 04:07:59.207058] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:06.047 [2024-12-06 04:07:59.207083] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:17:06.047 [2024-12-06 04:07:59.207094] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:06.047 [2024-12-06 04:07:59.209265] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:06.047 [2024-12-06 04:07:59.209306] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:17:06.047 BaseBdev2 00:17:06.047 04:07:59 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:06.047 04:07:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:17:06.047 04:07:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:17:06.047 04:07:59 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:06.047 04:07:59 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:06.047 BaseBdev3_malloc 00:17:06.047 04:07:59 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:06.047 04:07:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:17:06.047 04:07:59 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:06.047 04:07:59 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:06.047 [2024-12-06 04:07:59.275587] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:17:06.047 [2024-12-06 04:07:59.275653] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:06.047 [2024-12-06 04:07:59.275679] vbdev_passthru.c: 682:vbdev_passthru_register: 
*NOTICE*: io_device created at: 0x0x616000008a80 00:17:06.047 [2024-12-06 04:07:59.275691] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:06.047 [2024-12-06 04:07:59.278089] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:06.047 [2024-12-06 04:07:59.278132] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:17:06.047 BaseBdev3 00:17:06.047 04:07:59 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:06.047 04:07:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:17:06.047 04:07:59 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:06.047 04:07:59 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:06.047 spare_malloc 00:17:06.047 04:07:59 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:06.047 04:07:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:17:06.047 04:07:59 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:06.047 04:07:59 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:06.047 spare_delay 00:17:06.047 04:07:59 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:06.047 04:07:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:17:06.047 04:07:59 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:06.047 04:07:59 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:06.048 [2024-12-06 04:07:59.342990] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:17:06.048 [2024-12-06 04:07:59.343063] 
vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:06.048 [2024-12-06 04:07:59.343099] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:17:06.048 [2024-12-06 04:07:59.343109] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:06.048 [2024-12-06 04:07:59.345324] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:06.048 [2024-12-06 04:07:59.345371] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:17:06.048 spare 00:17:06.048 04:07:59 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:06.048 04:07:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 00:17:06.048 04:07:59 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:06.048 04:07:59 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:06.048 [2024-12-06 04:07:59.351069] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:06.048 [2024-12-06 04:07:59.353052] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:06.048 [2024-12-06 04:07:59.353173] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:17:06.048 [2024-12-06 04:07:59.353286] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:17:06.048 [2024-12-06 04:07:59.353299] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:17:06.048 [2024-12-06 04:07:59.353624] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:17:06.048 [2024-12-06 04:07:59.360214] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:17:06.048 [2024-12-06 04:07:59.360281] 
bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:17:06.048 [2024-12-06 04:07:59.360583] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:06.048 04:07:59 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:06.048 04:07:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:17:06.048 04:07:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:06.048 04:07:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:06.048 04:07:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:06.048 04:07:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:06.048 04:07:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:06.048 04:07:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:06.048 04:07:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:06.048 04:07:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:06.048 04:07:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:06.048 04:07:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:06.048 04:07:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:06.048 04:07:59 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:06.048 04:07:59 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:06.048 04:07:59 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:06.306 04:07:59 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:06.306 "name": "raid_bdev1", 00:17:06.306 "uuid": "d259ec41-5892-4d90-8215-f32b1b3f7eeb", 00:17:06.306 "strip_size_kb": 64, 00:17:06.306 "state": "online", 00:17:06.306 "raid_level": "raid5f", 00:17:06.306 "superblock": false, 00:17:06.306 "num_base_bdevs": 3, 00:17:06.306 "num_base_bdevs_discovered": 3, 00:17:06.306 "num_base_bdevs_operational": 3, 00:17:06.306 "base_bdevs_list": [ 00:17:06.306 { 00:17:06.306 "name": "BaseBdev1", 00:17:06.306 "uuid": "8001534b-638b-5809-8b9f-0262852b5089", 00:17:06.306 "is_configured": true, 00:17:06.306 "data_offset": 0, 00:17:06.306 "data_size": 65536 00:17:06.306 }, 00:17:06.306 { 00:17:06.306 "name": "BaseBdev2", 00:17:06.306 "uuid": "730592e5-982c-5ac1-b3ea-42cf489854fe", 00:17:06.306 "is_configured": true, 00:17:06.306 "data_offset": 0, 00:17:06.306 "data_size": 65536 00:17:06.306 }, 00:17:06.306 { 00:17:06.306 "name": "BaseBdev3", 00:17:06.306 "uuid": "7bf16ff2-acec-5f24-97ba-5a3cde84ed59", 00:17:06.306 "is_configured": true, 00:17:06.306 "data_offset": 0, 00:17:06.306 "data_size": 65536 00:17:06.306 } 00:17:06.306 ] 00:17:06.306 }' 00:17:06.306 04:07:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:06.306 04:07:59 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:06.565 04:07:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:17:06.565 04:07:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:17:06.565 04:07:59 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:06.565 04:07:59 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:06.565 [2024-12-06 04:07:59.811665] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:06.565 04:07:59 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:17:06.565 04:07:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=131072 00:17:06.565 04:07:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:17:06.565 04:07:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:06.565 04:07:59 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:06.565 04:07:59 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:06.565 04:07:59 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:06.565 04:07:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # data_offset=0 00:17:06.565 04:07:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:17:06.565 04:07:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:17:06.565 04:07:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:17:06.565 04:07:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:17:06.565 04:07:59 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:17:06.565 04:07:59 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:17:06.565 04:07:59 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:17:06.565 04:07:59 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:17:06.565 04:07:59 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:17:06.565 04:07:59 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:17:06.565 04:07:59 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:17:06.565 04:07:59 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # 
(( i < 1 )) 00:17:06.565 04:07:59 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:17:06.823 [2024-12-06 04:08:00.111018] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:17:06.823 /dev/nbd0 00:17:06.823 04:08:00 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:17:06.823 04:08:00 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:17:06.823 04:08:00 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:17:06.823 04:08:00 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:17:06.823 04:08:00 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:17:06.823 04:08:00 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:17:06.823 04:08:00 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:17:06.823 04:08:00 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@877 -- # break 00:17:06.823 04:08:00 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:17:06.823 04:08:00 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:17:06.823 04:08:00 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:17:06.823 1+0 records in 00:17:06.823 1+0 records out 00:17:06.823 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000272102 s, 15.1 MB/s 00:17:06.823 04:08:00 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:06.823 04:08:00 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:17:06.823 04:08:00 bdev_raid.raid5f_rebuild_test 
-- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:06.823 04:08:00 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:17:06.823 04:08:00 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:17:06.823 04:08:00 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:17:06.823 04:08:00 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:17:06.823 04:08:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@629 -- # '[' raid5f = raid5f ']' 00:17:06.823 04:08:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@630 -- # write_unit_size=256 00:17:06.823 04:08:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@631 -- # echo 128 00:17:06.823 04:08:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=131072 count=512 oflag=direct 00:17:07.391 512+0 records in 00:17:07.391 512+0 records out 00:17:07.391 67108864 bytes (67 MB, 64 MiB) copied, 0.432666 s, 155 MB/s 00:17:07.391 04:08:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:17:07.391 04:08:00 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:17:07.391 04:08:00 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:17:07.391 04:08:00 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:17:07.391 04:08:00 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:17:07.391 04:08:00 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:17:07.391 04:08:00 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:17:07.651 04:08:00 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:17:07.651 
[2024-12-06 04:08:00.836306] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:07.651 04:08:00 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:17:07.651 04:08:00 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:17:07.651 04:08:00 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:17:07.651 04:08:00 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:17:07.651 04:08:00 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:17:07.651 04:08:00 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:17:07.651 04:08:00 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:17:07.651 04:08:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:17:07.651 04:08:00 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:07.651 04:08:00 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:07.651 [2024-12-06 04:08:00.853452] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:17:07.651 04:08:00 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:07.651 04:08:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:17:07.651 04:08:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:07.651 04:08:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:07.651 04:08:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:07.651 04:08:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:07.651 04:08:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=2 00:17:07.651 04:08:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:07.651 04:08:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:07.651 04:08:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:07.651 04:08:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:07.651 04:08:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:07.651 04:08:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:07.651 04:08:00 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:07.651 04:08:00 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:07.651 04:08:00 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:07.651 04:08:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:07.651 "name": "raid_bdev1", 00:17:07.651 "uuid": "d259ec41-5892-4d90-8215-f32b1b3f7eeb", 00:17:07.651 "strip_size_kb": 64, 00:17:07.651 "state": "online", 00:17:07.651 "raid_level": "raid5f", 00:17:07.651 "superblock": false, 00:17:07.651 "num_base_bdevs": 3, 00:17:07.651 "num_base_bdevs_discovered": 2, 00:17:07.651 "num_base_bdevs_operational": 2, 00:17:07.651 "base_bdevs_list": [ 00:17:07.651 { 00:17:07.651 "name": null, 00:17:07.651 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:07.651 "is_configured": false, 00:17:07.651 "data_offset": 0, 00:17:07.651 "data_size": 65536 00:17:07.651 }, 00:17:07.651 { 00:17:07.651 "name": "BaseBdev2", 00:17:07.651 "uuid": "730592e5-982c-5ac1-b3ea-42cf489854fe", 00:17:07.651 "is_configured": true, 00:17:07.651 "data_offset": 0, 00:17:07.651 "data_size": 65536 00:17:07.651 }, 00:17:07.651 { 00:17:07.651 "name": "BaseBdev3", 00:17:07.651 "uuid": 
"7bf16ff2-acec-5f24-97ba-5a3cde84ed59", 00:17:07.651 "is_configured": true, 00:17:07.651 "data_offset": 0, 00:17:07.651 "data_size": 65536 00:17:07.651 } 00:17:07.651 ] 00:17:07.651 }' 00:17:07.651 04:08:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:07.651 04:08:00 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:08.219 04:08:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:17:08.219 04:08:01 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:08.219 04:08:01 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:08.219 [2024-12-06 04:08:01.292769] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:08.219 [2024-12-06 04:08:01.313790] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b680 00:17:08.219 04:08:01 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:08.219 04:08:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@647 -- # sleep 1 00:17:08.219 [2024-12-06 04:08:01.324219] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:17:09.155 04:08:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:09.155 04:08:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:09.155 04:08:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:09.155 04:08:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:09.155 04:08:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:09.155 04:08:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:09.155 04:08:02 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:09.155 04:08:02 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:09.155 04:08:02 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:09.155 04:08:02 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:09.155 04:08:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:09.155 "name": "raid_bdev1", 00:17:09.155 "uuid": "d259ec41-5892-4d90-8215-f32b1b3f7eeb", 00:17:09.155 "strip_size_kb": 64, 00:17:09.155 "state": "online", 00:17:09.155 "raid_level": "raid5f", 00:17:09.155 "superblock": false, 00:17:09.155 "num_base_bdevs": 3, 00:17:09.155 "num_base_bdevs_discovered": 3, 00:17:09.155 "num_base_bdevs_operational": 3, 00:17:09.155 "process": { 00:17:09.155 "type": "rebuild", 00:17:09.155 "target": "spare", 00:17:09.155 "progress": { 00:17:09.155 "blocks": 20480, 00:17:09.155 "percent": 15 00:17:09.155 } 00:17:09.155 }, 00:17:09.155 "base_bdevs_list": [ 00:17:09.155 { 00:17:09.155 "name": "spare", 00:17:09.155 "uuid": "2ade48d9-b19b-5d61-95d3-11232367bedb", 00:17:09.155 "is_configured": true, 00:17:09.155 "data_offset": 0, 00:17:09.155 "data_size": 65536 00:17:09.155 }, 00:17:09.155 { 00:17:09.155 "name": "BaseBdev2", 00:17:09.155 "uuid": "730592e5-982c-5ac1-b3ea-42cf489854fe", 00:17:09.155 "is_configured": true, 00:17:09.155 "data_offset": 0, 00:17:09.155 "data_size": 65536 00:17:09.155 }, 00:17:09.155 { 00:17:09.155 "name": "BaseBdev3", 00:17:09.155 "uuid": "7bf16ff2-acec-5f24-97ba-5a3cde84ed59", 00:17:09.155 "is_configured": true, 00:17:09.155 "data_offset": 0, 00:17:09.155 "data_size": 65536 00:17:09.155 } 00:17:09.155 ] 00:17:09.155 }' 00:17:09.155 04:08:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:09.155 04:08:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ 
rebuild == \r\e\b\u\i\l\d ]] 00:17:09.155 04:08:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:09.155 04:08:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:09.155 04:08:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:17:09.155 04:08:02 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:09.155 04:08:02 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:09.155 [2024-12-06 04:08:02.464789] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:09.414 [2024-12-06 04:08:02.536967] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:17:09.414 [2024-12-06 04:08:02.537094] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:09.414 [2024-12-06 04:08:02.537123] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:09.414 [2024-12-06 04:08:02.537134] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:17:09.414 04:08:02 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:09.414 04:08:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:17:09.414 04:08:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:09.414 04:08:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:09.414 04:08:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:09.414 04:08:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:09.414 04:08:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 
00:17:09.414 04:08:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:09.414 04:08:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:09.414 04:08:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:09.414 04:08:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:09.414 04:08:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:09.414 04:08:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:09.414 04:08:02 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:09.414 04:08:02 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:09.414 04:08:02 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:09.414 04:08:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:09.414 "name": "raid_bdev1", 00:17:09.414 "uuid": "d259ec41-5892-4d90-8215-f32b1b3f7eeb", 00:17:09.414 "strip_size_kb": 64, 00:17:09.414 "state": "online", 00:17:09.414 "raid_level": "raid5f", 00:17:09.414 "superblock": false, 00:17:09.414 "num_base_bdevs": 3, 00:17:09.414 "num_base_bdevs_discovered": 2, 00:17:09.414 "num_base_bdevs_operational": 2, 00:17:09.414 "base_bdevs_list": [ 00:17:09.414 { 00:17:09.414 "name": null, 00:17:09.414 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:09.414 "is_configured": false, 00:17:09.414 "data_offset": 0, 00:17:09.414 "data_size": 65536 00:17:09.414 }, 00:17:09.414 { 00:17:09.415 "name": "BaseBdev2", 00:17:09.415 "uuid": "730592e5-982c-5ac1-b3ea-42cf489854fe", 00:17:09.415 "is_configured": true, 00:17:09.415 "data_offset": 0, 00:17:09.415 "data_size": 65536 00:17:09.415 }, 00:17:09.415 { 00:17:09.415 "name": "BaseBdev3", 00:17:09.415 "uuid": 
"7bf16ff2-acec-5f24-97ba-5a3cde84ed59", 00:17:09.415 "is_configured": true, 00:17:09.415 "data_offset": 0, 00:17:09.415 "data_size": 65536 00:17:09.415 } 00:17:09.415 ] 00:17:09.415 }' 00:17:09.415 04:08:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:09.415 04:08:02 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:09.982 04:08:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:09.982 04:08:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:09.982 04:08:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:09.982 04:08:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:09.982 04:08:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:09.982 04:08:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:09.982 04:08:03 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:09.982 04:08:03 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:09.982 04:08:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:09.982 04:08:03 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:09.982 04:08:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:09.982 "name": "raid_bdev1", 00:17:09.982 "uuid": "d259ec41-5892-4d90-8215-f32b1b3f7eeb", 00:17:09.982 "strip_size_kb": 64, 00:17:09.982 "state": "online", 00:17:09.982 "raid_level": "raid5f", 00:17:09.982 "superblock": false, 00:17:09.982 "num_base_bdevs": 3, 00:17:09.982 "num_base_bdevs_discovered": 2, 00:17:09.982 "num_base_bdevs_operational": 2, 00:17:09.982 "base_bdevs_list": [ 00:17:09.982 { 00:17:09.982 
"name": null, 00:17:09.982 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:09.982 "is_configured": false, 00:17:09.982 "data_offset": 0, 00:17:09.982 "data_size": 65536 00:17:09.982 }, 00:17:09.982 { 00:17:09.982 "name": "BaseBdev2", 00:17:09.982 "uuid": "730592e5-982c-5ac1-b3ea-42cf489854fe", 00:17:09.982 "is_configured": true, 00:17:09.982 "data_offset": 0, 00:17:09.982 "data_size": 65536 00:17:09.982 }, 00:17:09.982 { 00:17:09.982 "name": "BaseBdev3", 00:17:09.982 "uuid": "7bf16ff2-acec-5f24-97ba-5a3cde84ed59", 00:17:09.982 "is_configured": true, 00:17:09.982 "data_offset": 0, 00:17:09.982 "data_size": 65536 00:17:09.982 } 00:17:09.982 ] 00:17:09.982 }' 00:17:09.982 04:08:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:09.982 04:08:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:09.982 04:08:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:09.982 04:08:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:09.982 04:08:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:17:09.982 04:08:03 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:09.982 04:08:03 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:09.982 [2024-12-06 04:08:03.205389] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:09.982 [2024-12-06 04:08:03.225238] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b750 00:17:09.982 04:08:03 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:09.982 04:08:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@663 -- # sleep 1 00:17:09.982 [2024-12-06 04:08:03.235188] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild 
on raid bdev raid_bdev1 00:17:10.957 04:08:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:10.957 04:08:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:10.958 04:08:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:10.958 04:08:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:10.958 04:08:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:10.958 04:08:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:10.958 04:08:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:10.958 04:08:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:10.958 04:08:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:10.958 04:08:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:10.958 04:08:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:10.958 "name": "raid_bdev1", 00:17:10.958 "uuid": "d259ec41-5892-4d90-8215-f32b1b3f7eeb", 00:17:10.958 "strip_size_kb": 64, 00:17:10.958 "state": "online", 00:17:10.958 "raid_level": "raid5f", 00:17:10.958 "superblock": false, 00:17:10.958 "num_base_bdevs": 3, 00:17:10.958 "num_base_bdevs_discovered": 3, 00:17:10.958 "num_base_bdevs_operational": 3, 00:17:10.958 "process": { 00:17:10.958 "type": "rebuild", 00:17:10.958 "target": "spare", 00:17:10.958 "progress": { 00:17:10.958 "blocks": 18432, 00:17:10.958 "percent": 14 00:17:10.958 } 00:17:10.958 }, 00:17:10.958 "base_bdevs_list": [ 00:17:10.958 { 00:17:10.958 "name": "spare", 00:17:10.958 "uuid": "2ade48d9-b19b-5d61-95d3-11232367bedb", 00:17:10.958 "is_configured": true, 00:17:10.958 "data_offset": 0, 
00:17:10.958 "data_size": 65536 00:17:10.958 }, 00:17:10.958 { 00:17:10.958 "name": "BaseBdev2", 00:17:10.958 "uuid": "730592e5-982c-5ac1-b3ea-42cf489854fe", 00:17:10.958 "is_configured": true, 00:17:10.958 "data_offset": 0, 00:17:10.958 "data_size": 65536 00:17:10.958 }, 00:17:10.958 { 00:17:10.958 "name": "BaseBdev3", 00:17:10.958 "uuid": "7bf16ff2-acec-5f24-97ba-5a3cde84ed59", 00:17:10.958 "is_configured": true, 00:17:10.958 "data_offset": 0, 00:17:10.958 "data_size": 65536 00:17:10.958 } 00:17:10.958 ] 00:17:10.958 }' 00:17:10.958 04:08:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:11.216 04:08:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:11.216 04:08:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:11.216 04:08:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:11.216 04:08:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@666 -- # '[' false = true ']' 00:17:11.216 04:08:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=3 00:17:11.216 04:08:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@693 -- # '[' raid5f = raid1 ']' 00:17:11.216 04:08:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@706 -- # local timeout=562 00:17:11.216 04:08:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:11.216 04:08:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:11.216 04:08:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:11.216 04:08:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:11.216 04:08:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:11.216 04:08:04 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:11.216 04:08:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:11.216 04:08:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:11.216 04:08:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:11.216 04:08:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:11.216 04:08:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:11.216 04:08:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:11.216 "name": "raid_bdev1", 00:17:11.216 "uuid": "d259ec41-5892-4d90-8215-f32b1b3f7eeb", 00:17:11.216 "strip_size_kb": 64, 00:17:11.216 "state": "online", 00:17:11.216 "raid_level": "raid5f", 00:17:11.216 "superblock": false, 00:17:11.216 "num_base_bdevs": 3, 00:17:11.216 "num_base_bdevs_discovered": 3, 00:17:11.216 "num_base_bdevs_operational": 3, 00:17:11.216 "process": { 00:17:11.216 "type": "rebuild", 00:17:11.216 "target": "spare", 00:17:11.216 "progress": { 00:17:11.216 "blocks": 22528, 00:17:11.216 "percent": 17 00:17:11.216 } 00:17:11.216 }, 00:17:11.216 "base_bdevs_list": [ 00:17:11.216 { 00:17:11.216 "name": "spare", 00:17:11.216 "uuid": "2ade48d9-b19b-5d61-95d3-11232367bedb", 00:17:11.216 "is_configured": true, 00:17:11.216 "data_offset": 0, 00:17:11.216 "data_size": 65536 00:17:11.216 }, 00:17:11.216 { 00:17:11.216 "name": "BaseBdev2", 00:17:11.216 "uuid": "730592e5-982c-5ac1-b3ea-42cf489854fe", 00:17:11.216 "is_configured": true, 00:17:11.216 "data_offset": 0, 00:17:11.216 "data_size": 65536 00:17:11.216 }, 00:17:11.216 { 00:17:11.216 "name": "BaseBdev3", 00:17:11.216 "uuid": "7bf16ff2-acec-5f24-97ba-5a3cde84ed59", 00:17:11.216 "is_configured": true, 00:17:11.216 "data_offset": 0, 00:17:11.216 "data_size": 65536 00:17:11.216 } 
00:17:11.216 ] 00:17:11.217 }' 00:17:11.217 04:08:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:11.217 04:08:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:11.217 04:08:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:11.217 04:08:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:11.217 04:08:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:17:12.589 04:08:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:12.589 04:08:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:12.589 04:08:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:12.589 04:08:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:12.589 04:08:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:12.589 04:08:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:12.589 04:08:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:12.589 04:08:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:12.589 04:08:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:12.589 04:08:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:12.589 04:08:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:12.589 04:08:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:12.589 "name": "raid_bdev1", 00:17:12.589 "uuid": "d259ec41-5892-4d90-8215-f32b1b3f7eeb", 00:17:12.589 
"strip_size_kb": 64, 00:17:12.589 "state": "online", 00:17:12.589 "raid_level": "raid5f", 00:17:12.589 "superblock": false, 00:17:12.589 "num_base_bdevs": 3, 00:17:12.589 "num_base_bdevs_discovered": 3, 00:17:12.589 "num_base_bdevs_operational": 3, 00:17:12.589 "process": { 00:17:12.589 "type": "rebuild", 00:17:12.589 "target": "spare", 00:17:12.589 "progress": { 00:17:12.589 "blocks": 45056, 00:17:12.589 "percent": 34 00:17:12.589 } 00:17:12.589 }, 00:17:12.589 "base_bdevs_list": [ 00:17:12.589 { 00:17:12.589 "name": "spare", 00:17:12.589 "uuid": "2ade48d9-b19b-5d61-95d3-11232367bedb", 00:17:12.589 "is_configured": true, 00:17:12.589 "data_offset": 0, 00:17:12.589 "data_size": 65536 00:17:12.589 }, 00:17:12.589 { 00:17:12.589 "name": "BaseBdev2", 00:17:12.589 "uuid": "730592e5-982c-5ac1-b3ea-42cf489854fe", 00:17:12.589 "is_configured": true, 00:17:12.589 "data_offset": 0, 00:17:12.589 "data_size": 65536 00:17:12.589 }, 00:17:12.589 { 00:17:12.589 "name": "BaseBdev3", 00:17:12.589 "uuid": "7bf16ff2-acec-5f24-97ba-5a3cde84ed59", 00:17:12.589 "is_configured": true, 00:17:12.589 "data_offset": 0, 00:17:12.589 "data_size": 65536 00:17:12.589 } 00:17:12.589 ] 00:17:12.589 }' 00:17:12.589 04:08:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:12.589 04:08:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:12.589 04:08:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:12.589 04:08:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:12.589 04:08:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:17:13.523 04:08:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:13.523 04:08:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:13.523 04:08:06 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:13.523 04:08:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:13.523 04:08:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:13.523 04:08:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:13.523 04:08:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:13.523 04:08:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:13.523 04:08:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:13.523 04:08:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:13.523 04:08:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:13.523 04:08:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:13.524 "name": "raid_bdev1", 00:17:13.524 "uuid": "d259ec41-5892-4d90-8215-f32b1b3f7eeb", 00:17:13.524 "strip_size_kb": 64, 00:17:13.524 "state": "online", 00:17:13.524 "raid_level": "raid5f", 00:17:13.524 "superblock": false, 00:17:13.524 "num_base_bdevs": 3, 00:17:13.524 "num_base_bdevs_discovered": 3, 00:17:13.524 "num_base_bdevs_operational": 3, 00:17:13.524 "process": { 00:17:13.524 "type": "rebuild", 00:17:13.524 "target": "spare", 00:17:13.524 "progress": { 00:17:13.524 "blocks": 69632, 00:17:13.524 "percent": 53 00:17:13.524 } 00:17:13.524 }, 00:17:13.524 "base_bdevs_list": [ 00:17:13.524 { 00:17:13.524 "name": "spare", 00:17:13.524 "uuid": "2ade48d9-b19b-5d61-95d3-11232367bedb", 00:17:13.524 "is_configured": true, 00:17:13.524 "data_offset": 0, 00:17:13.524 "data_size": 65536 00:17:13.524 }, 00:17:13.524 { 00:17:13.524 "name": "BaseBdev2", 00:17:13.524 "uuid": "730592e5-982c-5ac1-b3ea-42cf489854fe", 00:17:13.524 
"is_configured": true, 00:17:13.524 "data_offset": 0, 00:17:13.524 "data_size": 65536 00:17:13.524 }, 00:17:13.524 { 00:17:13.524 "name": "BaseBdev3", 00:17:13.524 "uuid": "7bf16ff2-acec-5f24-97ba-5a3cde84ed59", 00:17:13.524 "is_configured": true, 00:17:13.524 "data_offset": 0, 00:17:13.524 "data_size": 65536 00:17:13.524 } 00:17:13.524 ] 00:17:13.524 }' 00:17:13.524 04:08:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:13.524 04:08:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:13.524 04:08:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:13.524 04:08:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:13.524 04:08:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:17:14.901 04:08:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:14.901 04:08:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:14.901 04:08:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:14.901 04:08:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:14.901 04:08:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:14.901 04:08:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:14.901 04:08:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:14.901 04:08:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:14.901 04:08:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:14.901 04:08:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # 
set +x 00:17:14.901 04:08:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:14.901 04:08:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:14.901 "name": "raid_bdev1", 00:17:14.901 "uuid": "d259ec41-5892-4d90-8215-f32b1b3f7eeb", 00:17:14.901 "strip_size_kb": 64, 00:17:14.901 "state": "online", 00:17:14.901 "raid_level": "raid5f", 00:17:14.901 "superblock": false, 00:17:14.901 "num_base_bdevs": 3, 00:17:14.901 "num_base_bdevs_discovered": 3, 00:17:14.901 "num_base_bdevs_operational": 3, 00:17:14.901 "process": { 00:17:14.901 "type": "rebuild", 00:17:14.901 "target": "spare", 00:17:14.901 "progress": { 00:17:14.901 "blocks": 92160, 00:17:14.901 "percent": 70 00:17:14.901 } 00:17:14.901 }, 00:17:14.901 "base_bdevs_list": [ 00:17:14.901 { 00:17:14.901 "name": "spare", 00:17:14.901 "uuid": "2ade48d9-b19b-5d61-95d3-11232367bedb", 00:17:14.901 "is_configured": true, 00:17:14.901 "data_offset": 0, 00:17:14.901 "data_size": 65536 00:17:14.901 }, 00:17:14.901 { 00:17:14.901 "name": "BaseBdev2", 00:17:14.901 "uuid": "730592e5-982c-5ac1-b3ea-42cf489854fe", 00:17:14.901 "is_configured": true, 00:17:14.901 "data_offset": 0, 00:17:14.901 "data_size": 65536 00:17:14.901 }, 00:17:14.901 { 00:17:14.901 "name": "BaseBdev3", 00:17:14.901 "uuid": "7bf16ff2-acec-5f24-97ba-5a3cde84ed59", 00:17:14.901 "is_configured": true, 00:17:14.901 "data_offset": 0, 00:17:14.901 "data_size": 65536 00:17:14.901 } 00:17:14.901 ] 00:17:14.901 }' 00:17:14.901 04:08:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:14.901 04:08:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:14.901 04:08:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:14.901 04:08:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:14.901 04:08:07 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:17:15.839 04:08:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:15.839 04:08:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:15.839 04:08:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:15.839 04:08:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:15.839 04:08:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:15.839 04:08:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:15.839 04:08:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:15.839 04:08:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:15.839 04:08:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:15.839 04:08:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:15.839 04:08:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:15.839 04:08:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:15.839 "name": "raid_bdev1", 00:17:15.839 "uuid": "d259ec41-5892-4d90-8215-f32b1b3f7eeb", 00:17:15.839 "strip_size_kb": 64, 00:17:15.839 "state": "online", 00:17:15.839 "raid_level": "raid5f", 00:17:15.839 "superblock": false, 00:17:15.839 "num_base_bdevs": 3, 00:17:15.839 "num_base_bdevs_discovered": 3, 00:17:15.839 "num_base_bdevs_operational": 3, 00:17:15.839 "process": { 00:17:15.839 "type": "rebuild", 00:17:15.839 "target": "spare", 00:17:15.839 "progress": { 00:17:15.839 "blocks": 114688, 00:17:15.839 "percent": 87 00:17:15.839 } 00:17:15.839 }, 00:17:15.839 "base_bdevs_list": [ 00:17:15.839 { 
00:17:15.839 "name": "spare", 00:17:15.839 "uuid": "2ade48d9-b19b-5d61-95d3-11232367bedb", 00:17:15.839 "is_configured": true, 00:17:15.839 "data_offset": 0, 00:17:15.839 "data_size": 65536 00:17:15.839 }, 00:17:15.839 { 00:17:15.839 "name": "BaseBdev2", 00:17:15.839 "uuid": "730592e5-982c-5ac1-b3ea-42cf489854fe", 00:17:15.839 "is_configured": true, 00:17:15.839 "data_offset": 0, 00:17:15.839 "data_size": 65536 00:17:15.839 }, 00:17:15.839 { 00:17:15.839 "name": "BaseBdev3", 00:17:15.839 "uuid": "7bf16ff2-acec-5f24-97ba-5a3cde84ed59", 00:17:15.839 "is_configured": true, 00:17:15.839 "data_offset": 0, 00:17:15.839 "data_size": 65536 00:17:15.839 } 00:17:15.839 ] 00:17:15.839 }' 00:17:15.839 04:08:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:15.839 04:08:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:15.839 04:08:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:15.839 04:08:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:15.839 04:08:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:17:16.444 [2024-12-06 04:08:09.696344] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:17:16.444 [2024-12-06 04:08:09.696548] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:17:16.444 [2024-12-06 04:08:09.696622] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:17.015 04:08:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:17.015 04:08:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:17.015 04:08:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:17.015 04:08:10 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:17.015 04:08:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:17.015 04:08:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:17.015 04:08:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:17.015 04:08:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:17.015 04:08:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:17.015 04:08:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:17.015 04:08:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:17.015 04:08:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:17.015 "name": "raid_bdev1", 00:17:17.015 "uuid": "d259ec41-5892-4d90-8215-f32b1b3f7eeb", 00:17:17.015 "strip_size_kb": 64, 00:17:17.015 "state": "online", 00:17:17.015 "raid_level": "raid5f", 00:17:17.015 "superblock": false, 00:17:17.015 "num_base_bdevs": 3, 00:17:17.015 "num_base_bdevs_discovered": 3, 00:17:17.015 "num_base_bdevs_operational": 3, 00:17:17.015 "base_bdevs_list": [ 00:17:17.015 { 00:17:17.015 "name": "spare", 00:17:17.015 "uuid": "2ade48d9-b19b-5d61-95d3-11232367bedb", 00:17:17.015 "is_configured": true, 00:17:17.015 "data_offset": 0, 00:17:17.015 "data_size": 65536 00:17:17.015 }, 00:17:17.015 { 00:17:17.015 "name": "BaseBdev2", 00:17:17.015 "uuid": "730592e5-982c-5ac1-b3ea-42cf489854fe", 00:17:17.015 "is_configured": true, 00:17:17.015 "data_offset": 0, 00:17:17.015 "data_size": 65536 00:17:17.015 }, 00:17:17.015 { 00:17:17.015 "name": "BaseBdev3", 00:17:17.015 "uuid": "7bf16ff2-acec-5f24-97ba-5a3cde84ed59", 00:17:17.015 "is_configured": true, 00:17:17.015 "data_offset": 0, 00:17:17.015 "data_size": 65536 00:17:17.015 } 
00:17:17.015 ] 00:17:17.015 }' 00:17:17.015 04:08:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:17.015 04:08:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:17:17.015 04:08:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:17.015 04:08:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:17:17.015 04:08:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@709 -- # break 00:17:17.015 04:08:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:17.015 04:08:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:17.015 04:08:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:17.015 04:08:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:17.015 04:08:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:17.015 04:08:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:17.015 04:08:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:17.015 04:08:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:17.015 04:08:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:17.015 04:08:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:17.015 04:08:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:17.015 "name": "raid_bdev1", 00:17:17.016 "uuid": "d259ec41-5892-4d90-8215-f32b1b3f7eeb", 00:17:17.016 "strip_size_kb": 64, 00:17:17.016 "state": "online", 00:17:17.016 "raid_level": "raid5f", 00:17:17.016 "superblock": false, 
00:17:17.016 "num_base_bdevs": 3, 00:17:17.016 "num_base_bdevs_discovered": 3, 00:17:17.016 "num_base_bdevs_operational": 3, 00:17:17.016 "base_bdevs_list": [ 00:17:17.016 { 00:17:17.016 "name": "spare", 00:17:17.016 "uuid": "2ade48d9-b19b-5d61-95d3-11232367bedb", 00:17:17.016 "is_configured": true, 00:17:17.016 "data_offset": 0, 00:17:17.016 "data_size": 65536 00:17:17.016 }, 00:17:17.016 { 00:17:17.016 "name": "BaseBdev2", 00:17:17.016 "uuid": "730592e5-982c-5ac1-b3ea-42cf489854fe", 00:17:17.016 "is_configured": true, 00:17:17.016 "data_offset": 0, 00:17:17.016 "data_size": 65536 00:17:17.016 }, 00:17:17.016 { 00:17:17.016 "name": "BaseBdev3", 00:17:17.016 "uuid": "7bf16ff2-acec-5f24-97ba-5a3cde84ed59", 00:17:17.016 "is_configured": true, 00:17:17.016 "data_offset": 0, 00:17:17.016 "data_size": 65536 00:17:17.016 } 00:17:17.016 ] 00:17:17.016 }' 00:17:17.016 04:08:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:17.276 04:08:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:17.276 04:08:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:17.276 04:08:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:17.276 04:08:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:17:17.276 04:08:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:17.276 04:08:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:17.276 04:08:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:17.276 04:08:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:17.276 04:08:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:17.276 
04:08:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:17.276 04:08:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:17.276 04:08:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:17.276 04:08:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:17.276 04:08:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:17.276 04:08:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:17.276 04:08:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:17.276 04:08:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:17.276 04:08:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:17.276 04:08:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:17.276 "name": "raid_bdev1", 00:17:17.276 "uuid": "d259ec41-5892-4d90-8215-f32b1b3f7eeb", 00:17:17.276 "strip_size_kb": 64, 00:17:17.276 "state": "online", 00:17:17.276 "raid_level": "raid5f", 00:17:17.276 "superblock": false, 00:17:17.276 "num_base_bdevs": 3, 00:17:17.276 "num_base_bdevs_discovered": 3, 00:17:17.276 "num_base_bdevs_operational": 3, 00:17:17.276 "base_bdevs_list": [ 00:17:17.276 { 00:17:17.276 "name": "spare", 00:17:17.276 "uuid": "2ade48d9-b19b-5d61-95d3-11232367bedb", 00:17:17.276 "is_configured": true, 00:17:17.276 "data_offset": 0, 00:17:17.276 "data_size": 65536 00:17:17.276 }, 00:17:17.276 { 00:17:17.276 "name": "BaseBdev2", 00:17:17.276 "uuid": "730592e5-982c-5ac1-b3ea-42cf489854fe", 00:17:17.276 "is_configured": true, 00:17:17.276 "data_offset": 0, 00:17:17.276 "data_size": 65536 00:17:17.276 }, 00:17:17.276 { 00:17:17.276 "name": "BaseBdev3", 00:17:17.276 "uuid": "7bf16ff2-acec-5f24-97ba-5a3cde84ed59", 
00:17:17.276 "is_configured": true, 00:17:17.276 "data_offset": 0, 00:17:17.276 "data_size": 65536 00:17:17.276 } 00:17:17.276 ] 00:17:17.276 }' 00:17:17.276 04:08:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:17.276 04:08:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:17.536 04:08:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:17:17.536 04:08:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:17.536 04:08:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:17.536 [2024-12-06 04:08:10.836287] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:17.536 [2024-12-06 04:08:10.836411] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:17.536 [2024-12-06 04:08:10.836518] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:17.536 [2024-12-06 04:08:10.836621] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:17.536 [2024-12-06 04:08:10.836641] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:17:17.536 04:08:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:17.536 04:08:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@720 -- # jq length 00:17:17.536 04:08:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:17.536 04:08:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:17.536 04:08:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:17.536 04:08:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:17.796 04:08:10 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:17:17.796 04:08:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:17:17.796 04:08:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:17:17.796 04:08:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:17:17.796 04:08:10 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:17:17.796 04:08:10 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:17:17.796 04:08:10 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:17:17.796 04:08:10 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:17:17.796 04:08:10 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:17:17.796 04:08:10 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:17:17.796 04:08:10 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:17:17.796 04:08:10 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:17:17.796 04:08:10 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:17:17.796 /dev/nbd0 00:17:17.796 04:08:11 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:17:17.796 04:08:11 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:17:17.796 04:08:11 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:17:17.796 04:08:11 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:17:17.796 04:08:11 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:17:17.796 04:08:11 bdev_raid.raid5f_rebuild_test -- 
common/autotest_common.sh@875 -- # (( i <= 20 )) 00:17:17.796 04:08:11 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:17:17.796 04:08:11 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@877 -- # break 00:17:17.796 04:08:11 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:17:17.796 04:08:11 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:17:17.796 04:08:11 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:17:17.796 1+0 records in 00:17:17.796 1+0 records out 00:17:17.796 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000206882 s, 19.8 MB/s 00:17:17.796 04:08:11 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:17.796 04:08:11 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:17:17.796 04:08:11 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:17.796 04:08:11 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:17:17.796 04:08:11 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:17:17.796 04:08:11 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:17:17.796 04:08:11 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:17:17.796 04:08:11 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:17:18.056 /dev/nbd1 00:17:18.056 04:08:11 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:17:18.056 04:08:11 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:17:18.056 04:08:11 
bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:17:18.056 04:08:11 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:17:18.056 04:08:11 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:17:18.056 04:08:11 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:17:18.056 04:08:11 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:17:18.056 04:08:11 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@877 -- # break 00:17:18.056 04:08:11 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:17:18.056 04:08:11 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:17:18.056 04:08:11 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:17:18.056 1+0 records in 00:17:18.056 1+0 records out 00:17:18.056 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000324105 s, 12.6 MB/s 00:17:18.056 04:08:11 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:18.316 04:08:11 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:17:18.316 04:08:11 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:18.316 04:08:11 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:17:18.316 04:08:11 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:17:18.316 04:08:11 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:17:18.316 04:08:11 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:17:18.316 04:08:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@738 -- 
# cmp -i 0 /dev/nbd0 /dev/nbd1 00:17:18.316 04:08:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:17:18.316 04:08:11 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:17:18.316 04:08:11 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:17:18.316 04:08:11 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:17:18.316 04:08:11 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:17:18.316 04:08:11 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:17:18.316 04:08:11 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:17:18.576 04:08:11 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:17:18.576 04:08:11 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:17:18.576 04:08:11 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:17:18.576 04:08:11 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:17:18.576 04:08:11 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:17:18.576 04:08:11 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:17:18.576 04:08:11 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:17:18.576 04:08:11 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:17:18.576 04:08:11 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:17:18.576 04:08:11 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:17:18.836 04:08:12 bdev_raid.raid5f_rebuild_test -- 
bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:17:18.836 04:08:12 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:17:18.836 04:08:12 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:17:18.836 04:08:12 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:17:18.836 04:08:12 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:17:18.836 04:08:12 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:17:18.836 04:08:12 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:17:18.836 04:08:12 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:17:18.836 04:08:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@743 -- # '[' false = true ']' 00:17:18.836 04:08:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@784 -- # killprocess 81851 00:17:18.836 04:08:12 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@954 -- # '[' -z 81851 ']' 00:17:18.836 04:08:12 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@958 -- # kill -0 81851 00:17:18.836 04:08:12 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@959 -- # uname 00:17:18.836 04:08:12 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:18.836 04:08:12 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 81851 00:17:18.836 04:08:12 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:18.836 04:08:12 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:18.836 04:08:12 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 81851' 00:17:18.836 killing process with pid 81851 00:17:18.836 04:08:12 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@973 -- # kill 81851 00:17:18.836 
Received shutdown signal, test time was about 60.000000 seconds 00:17:18.836 00:17:18.836 Latency(us) 00:17:18.836 [2024-12-06T04:08:12.190Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:18.836 [2024-12-06T04:08:12.190Z] =================================================================================================================== 00:17:18.836 [2024-12-06T04:08:12.190Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:17:18.836 [2024-12-06 04:08:12.113666] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:17:18.836 04:08:12 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@978 -- # wait 81851 00:17:19.406 [2024-12-06 04:08:12.528958] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:17:20.788 04:08:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@786 -- # return 0 00:17:20.788 00:17:20.788 real 0m15.617s 00:17:20.788 user 0m19.185s 00:17:20.788 sys 0m2.062s 00:17:20.788 04:08:13 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:20.788 04:08:13 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:20.788 ************************************ 00:17:20.788 END TEST raid5f_rebuild_test 00:17:20.788 ************************************ 00:17:20.788 04:08:13 bdev_raid -- bdev/bdev_raid.sh@991 -- # run_test raid5f_rebuild_test_sb raid_rebuild_test raid5f 3 true false true 00:17:20.788 04:08:13 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:17:20.788 04:08:13 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:20.788 04:08:13 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:17:20.788 ************************************ 00:17:20.788 START TEST raid5f_rebuild_test_sb 00:17:20.788 ************************************ 00:17:20.788 04:08:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid5f 3 true false true 00:17:20.788 
04:08:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@569 -- # local raid_level=raid5f 00:17:20.788 04:08:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=3 00:17:20.788 04:08:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:17:20.788 04:08:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:17:20.788 04:08:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # local verify=true 00:17:20.788 04:08:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:17:20.788 04:08:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:17:20.788 04:08:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:17:20.788 04:08:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:17:20.788 04:08:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:17:20.788 04:08:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:17:20.788 04:08:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:17:20.788 04:08:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:17:20.788 04:08:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:17:20.788 04:08:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:17:20.788 04:08:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:17:20.788 04:08:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:17:20.788 04:08:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:17:20.788 04:08:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@575 -- # local 
raid_bdev_name=raid_bdev1 00:17:20.788 04:08:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # local strip_size 00:17:20.788 04:08:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@577 -- # local create_arg 00:17:20.788 04:08:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:17:20.788 04:08:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@579 -- # local data_offset 00:17:20.788 04:08:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@581 -- # '[' raid5f '!=' raid1 ']' 00:17:20.788 04:08:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@582 -- # '[' false = true ']' 00:17:20.788 04:08:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@586 -- # strip_size=64 00:17:20.788 04:08:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@587 -- # create_arg+=' -z 64' 00:17:20.788 04:08:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:17:20.788 04:08:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:17:20.788 04:08:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@597 -- # raid_pid=82294 00:17:20.788 04:08:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:17:20.788 04:08:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@598 -- # waitforlisten 82294 00:17:20.788 04:08:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@835 -- # '[' -z 82294 ']' 00:17:20.788 04:08:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:20.788 04:08:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:20.788 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:17:20.788 04:08:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:20.788 04:08:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:20.788 04:08:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:20.788 [2024-12-06 04:08:13.887524] Starting SPDK v25.01-pre git sha1 a4d2a837b / DPDK 24.03.0 initialization... 00:17:20.788 [2024-12-06 04:08:13.887667] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82294 ] 00:17:20.788 I/O size of 3145728 is greater than zero copy threshold (65536). 00:17:20.788 Zero copy mechanism will not be used. 00:17:20.788 [2024-12-06 04:08:14.065879] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:21.048 [2024-12-06 04:08:14.188165] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:21.307 [2024-12-06 04:08:14.406611] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:21.307 [2024-12-06 04:08:14.406679] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:21.567 04:08:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:21.567 04:08:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@868 -- # return 0 00:17:21.567 04:08:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:17:21.567 04:08:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:17:21.567 04:08:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:21.567 04:08:14 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:21.567 BaseBdev1_malloc 00:17:21.567 04:08:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:21.567 04:08:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:17:21.567 04:08:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:21.567 04:08:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:21.567 [2024-12-06 04:08:14.791571] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:17:21.567 [2024-12-06 04:08:14.791636] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:21.567 [2024-12-06 04:08:14.791657] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:17:21.567 [2024-12-06 04:08:14.791669] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:21.567 [2024-12-06 04:08:14.793755] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:21.567 [2024-12-06 04:08:14.793798] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:17:21.567 BaseBdev1 00:17:21.567 04:08:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:21.567 04:08:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:17:21.567 04:08:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:17:21.567 04:08:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:21.567 04:08:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:21.567 BaseBdev2_malloc 00:17:21.567 04:08:14 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:21.567 04:08:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:17:21.567 04:08:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:21.567 04:08:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:21.567 [2024-12-06 04:08:14.846793] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:17:21.567 [2024-12-06 04:08:14.846849] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:21.567 [2024-12-06 04:08:14.846888] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:17:21.567 [2024-12-06 04:08:14.846899] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:21.567 [2024-12-06 04:08:14.849142] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:21.567 [2024-12-06 04:08:14.849185] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:17:21.567 BaseBdev2 00:17:21.567 04:08:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:21.567 04:08:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:17:21.567 04:08:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:17:21.567 04:08:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:21.567 04:08:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:21.567 BaseBdev3_malloc 00:17:21.567 04:08:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:21.567 04:08:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b 
BaseBdev3_malloc -p BaseBdev3 00:17:21.567 04:08:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:21.567 04:08:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:21.567 [2024-12-06 04:08:14.913844] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:17:21.567 [2024-12-06 04:08:14.913907] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:21.567 [2024-12-06 04:08:14.913932] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:17:21.567 [2024-12-06 04:08:14.913944] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:21.567 [2024-12-06 04:08:14.916071] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:21.567 [2024-12-06 04:08:14.916109] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:17:21.567 BaseBdev3 00:17:21.567 04:08:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:21.567 04:08:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:17:21.567 04:08:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:21.567 04:08:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:21.827 spare_malloc 00:17:21.827 04:08:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:21.827 04:08:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:17:21.827 04:08:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:21.827 04:08:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:21.827 spare_delay 00:17:21.827 
04:08:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:21.827 04:08:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:17:21.827 04:08:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:21.827 04:08:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:21.827 [2024-12-06 04:08:14.982455] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:17:21.827 [2024-12-06 04:08:14.982512] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:21.827 [2024-12-06 04:08:14.982533] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:17:21.827 [2024-12-06 04:08:14.982544] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:21.827 [2024-12-06 04:08:14.984941] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:21.827 [2024-12-06 04:08:14.984988] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:17:21.827 spare 00:17:21.827 04:08:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:21.827 04:08:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 00:17:21.827 04:08:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:21.827 04:08:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:21.827 [2024-12-06 04:08:14.994498] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:21.827 [2024-12-06 04:08:14.996507] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:21.827 [2024-12-06 04:08:14.996605] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:17:21.827 [2024-12-06 04:08:14.996812] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:17:21.827 [2024-12-06 04:08:14.996832] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:17:21.827 [2024-12-06 04:08:14.997149] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:17:21.827 [2024-12-06 04:08:15.003274] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:17:21.827 [2024-12-06 04:08:15.003304] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:17:21.827 [2024-12-06 04:08:15.003492] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:21.827 04:08:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:21.827 04:08:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:17:21.827 04:08:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:21.827 04:08:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:21.827 04:08:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:21.827 04:08:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:21.827 04:08:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:21.827 04:08:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:21.827 04:08:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:21.827 04:08:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
00:17:21.827 04:08:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:21.827 04:08:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:21.827 04:08:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:21.827 04:08:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:21.827 04:08:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:21.827 04:08:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:21.827 04:08:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:21.827 "name": "raid_bdev1", 00:17:21.827 "uuid": "a3728886-a0e4-458f-9a37-af17451811cc", 00:17:21.827 "strip_size_kb": 64, 00:17:21.827 "state": "online", 00:17:21.827 "raid_level": "raid5f", 00:17:21.827 "superblock": true, 00:17:21.827 "num_base_bdevs": 3, 00:17:21.827 "num_base_bdevs_discovered": 3, 00:17:21.827 "num_base_bdevs_operational": 3, 00:17:21.827 "base_bdevs_list": [ 00:17:21.827 { 00:17:21.827 "name": "BaseBdev1", 00:17:21.827 "uuid": "3e2e3d59-e76a-550e-9a74-d82f3124c537", 00:17:21.827 "is_configured": true, 00:17:21.827 "data_offset": 2048, 00:17:21.827 "data_size": 63488 00:17:21.827 }, 00:17:21.827 { 00:17:21.827 "name": "BaseBdev2", 00:17:21.827 "uuid": "a164c015-1a1b-5738-aa67-30a682a85b44", 00:17:21.827 "is_configured": true, 00:17:21.827 "data_offset": 2048, 00:17:21.827 "data_size": 63488 00:17:21.827 }, 00:17:21.827 { 00:17:21.827 "name": "BaseBdev3", 00:17:21.827 "uuid": "eda94ff8-0ba6-5e52-8b06-441256238349", 00:17:21.827 "is_configured": true, 00:17:21.827 "data_offset": 2048, 00:17:21.827 "data_size": 63488 00:17:21.827 } 00:17:21.827 ] 00:17:21.827 }' 00:17:21.827 04:08:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:21.827 04:08:15 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:22.394 04:08:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:17:22.394 04:08:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:17:22.394 04:08:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:22.394 04:08:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:22.394 [2024-12-06 04:08:15.478165] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:22.394 04:08:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:22.394 04:08:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=126976 00:17:22.394 04:08:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:17:22.394 04:08:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:22.394 04:08:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:22.394 04:08:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:22.394 04:08:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:22.394 04:08:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # data_offset=2048 00:17:22.394 04:08:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:17:22.394 04:08:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:17:22.394 04:08:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:17:22.394 04:08:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:17:22.394 04:08:15 
bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:17:22.394 04:08:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:17:22.394 04:08:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:17:22.394 04:08:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:17:22.394 04:08:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:17:22.394 04:08:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:17:22.394 04:08:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:17:22.394 04:08:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:17:22.394 04:08:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:17:22.653 [2024-12-06 04:08:15.761560] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:17:22.653 /dev/nbd0 00:17:22.653 04:08:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:17:22.653 04:08:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:17:22.653 04:08:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:17:22.653 04:08:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:17:22.653 04:08:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:17:22.653 04:08:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:17:22.653 04:08:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:17:22.653 04:08:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 
00:17:22.653 04:08:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:17:22.653 04:08:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:17:22.653 04:08:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:17:22.653 1+0 records in 00:17:22.653 1+0 records out 00:17:22.653 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000380697 s, 10.8 MB/s 00:17:22.653 04:08:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:22.653 04:08:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:17:22.653 04:08:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:22.653 04:08:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:17:22.653 04:08:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:17:22.653 04:08:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:17:22.653 04:08:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:17:22.653 04:08:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@629 -- # '[' raid5f = raid5f ']' 00:17:22.653 04:08:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@630 -- # write_unit_size=256 00:17:22.653 04:08:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@631 -- # echo 128 00:17:22.653 04:08:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=131072 count=496 oflag=direct 00:17:22.912 496+0 records in 00:17:22.912 496+0 records out 00:17:22.912 65011712 bytes (65 MB, 62 MiB) copied, 0.365883 s, 178 MB/s 00:17:22.912 04:08:16 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:17:22.912 04:08:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:17:22.912 04:08:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:17:22.912 04:08:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:17:22.912 04:08:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:17:22.912 04:08:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:17:22.912 04:08:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:17:23.171 [2024-12-06 04:08:16.396926] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:23.171 04:08:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:17:23.171 04:08:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:17:23.171 04:08:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:17:23.171 04:08:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:17:23.171 04:08:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:17:23.171 04:08:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:17:23.171 04:08:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:17:23.171 04:08:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:17:23.171 04:08:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:17:23.171 04:08:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:23.171 04:08:16 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:17:23.171 [2024-12-06 04:08:16.429207] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:17:23.171 04:08:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:23.171 04:08:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:17:23.171 04:08:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:23.171 04:08:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:23.171 04:08:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:23.171 04:08:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:23.171 04:08:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:23.171 04:08:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:23.171 04:08:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:23.171 04:08:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:23.171 04:08:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:23.171 04:08:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:23.171 04:08:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:23.171 04:08:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:23.171 04:08:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:23.171 04:08:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:23.171 04:08:16 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:23.171 "name": "raid_bdev1", 00:17:23.171 "uuid": "a3728886-a0e4-458f-9a37-af17451811cc", 00:17:23.171 "strip_size_kb": 64, 00:17:23.171 "state": "online", 00:17:23.171 "raid_level": "raid5f", 00:17:23.171 "superblock": true, 00:17:23.171 "num_base_bdevs": 3, 00:17:23.171 "num_base_bdevs_discovered": 2, 00:17:23.171 "num_base_bdevs_operational": 2, 00:17:23.171 "base_bdevs_list": [ 00:17:23.171 { 00:17:23.171 "name": null, 00:17:23.171 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:23.171 "is_configured": false, 00:17:23.171 "data_offset": 0, 00:17:23.171 "data_size": 63488 00:17:23.171 }, 00:17:23.171 { 00:17:23.171 "name": "BaseBdev2", 00:17:23.171 "uuid": "a164c015-1a1b-5738-aa67-30a682a85b44", 00:17:23.171 "is_configured": true, 00:17:23.171 "data_offset": 2048, 00:17:23.171 "data_size": 63488 00:17:23.171 }, 00:17:23.171 { 00:17:23.171 "name": "BaseBdev3", 00:17:23.171 "uuid": "eda94ff8-0ba6-5e52-8b06-441256238349", 00:17:23.171 "is_configured": true, 00:17:23.171 "data_offset": 2048, 00:17:23.171 "data_size": 63488 00:17:23.171 } 00:17:23.171 ] 00:17:23.171 }' 00:17:23.171 04:08:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:23.171 04:08:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:23.739 04:08:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:17:23.739 04:08:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:23.739 04:08:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:23.739 [2024-12-06 04:08:16.848616] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:23.739 [2024-12-06 04:08:16.867219] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000028f80 00:17:23.739 04:08:16 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:23.739 04:08:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@647 -- # sleep 1 00:17:23.739 [2024-12-06 04:08:16.875516] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:17:24.677 04:08:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:24.677 04:08:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:24.677 04:08:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:24.677 04:08:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:24.677 04:08:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:24.677 04:08:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:24.677 04:08:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:24.677 04:08:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:24.677 04:08:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:24.677 04:08:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:24.677 04:08:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:24.677 "name": "raid_bdev1", 00:17:24.677 "uuid": "a3728886-a0e4-458f-9a37-af17451811cc", 00:17:24.677 "strip_size_kb": 64, 00:17:24.677 "state": "online", 00:17:24.677 "raid_level": "raid5f", 00:17:24.677 "superblock": true, 00:17:24.677 "num_base_bdevs": 3, 00:17:24.677 "num_base_bdevs_discovered": 3, 00:17:24.677 "num_base_bdevs_operational": 3, 00:17:24.677 "process": { 00:17:24.677 "type": "rebuild", 00:17:24.677 "target": "spare", 00:17:24.677 "progress": { 
00:17:24.677 "blocks": 18432, 00:17:24.677 "percent": 14 00:17:24.677 } 00:17:24.677 }, 00:17:24.677 "base_bdevs_list": [ 00:17:24.677 { 00:17:24.677 "name": "spare", 00:17:24.677 "uuid": "fba89b4f-d19b-5150-9234-8cf1fe0ca77d", 00:17:24.677 "is_configured": true, 00:17:24.677 "data_offset": 2048, 00:17:24.677 "data_size": 63488 00:17:24.677 }, 00:17:24.677 { 00:17:24.677 "name": "BaseBdev2", 00:17:24.677 "uuid": "a164c015-1a1b-5738-aa67-30a682a85b44", 00:17:24.677 "is_configured": true, 00:17:24.677 "data_offset": 2048, 00:17:24.677 "data_size": 63488 00:17:24.677 }, 00:17:24.677 { 00:17:24.677 "name": "BaseBdev3", 00:17:24.677 "uuid": "eda94ff8-0ba6-5e52-8b06-441256238349", 00:17:24.677 "is_configured": true, 00:17:24.677 "data_offset": 2048, 00:17:24.677 "data_size": 63488 00:17:24.677 } 00:17:24.677 ] 00:17:24.677 }' 00:17:24.677 04:08:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:24.677 04:08:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:24.677 04:08:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:24.677 04:08:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:24.677 04:08:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:17:24.677 04:08:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:24.677 04:08:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:24.677 [2024-12-06 04:08:17.986812] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:24.937 [2024-12-06 04:08:18.084799] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:17:24.937 [2024-12-06 04:08:18.084877] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: 
raid_bdev_destroy_cb 00:17:24.937 [2024-12-06 04:08:18.084896] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:24.937 [2024-12-06 04:08:18.084904] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:17:24.937 04:08:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:24.937 04:08:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:17:24.937 04:08:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:24.937 04:08:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:24.937 04:08:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:24.937 04:08:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:24.937 04:08:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:24.937 04:08:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:24.937 04:08:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:24.937 04:08:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:24.937 04:08:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:24.937 04:08:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:24.937 04:08:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:24.937 04:08:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:24.937 04:08:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:24.937 04:08:18 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:24.937 04:08:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:24.937 "name": "raid_bdev1", 00:17:24.937 "uuid": "a3728886-a0e4-458f-9a37-af17451811cc", 00:17:24.937 "strip_size_kb": 64, 00:17:24.937 "state": "online", 00:17:24.937 "raid_level": "raid5f", 00:17:24.937 "superblock": true, 00:17:24.937 "num_base_bdevs": 3, 00:17:24.937 "num_base_bdevs_discovered": 2, 00:17:24.937 "num_base_bdevs_operational": 2, 00:17:24.937 "base_bdevs_list": [ 00:17:24.937 { 00:17:24.937 "name": null, 00:17:24.937 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:24.937 "is_configured": false, 00:17:24.937 "data_offset": 0, 00:17:24.937 "data_size": 63488 00:17:24.937 }, 00:17:24.937 { 00:17:24.937 "name": "BaseBdev2", 00:17:24.937 "uuid": "a164c015-1a1b-5738-aa67-30a682a85b44", 00:17:24.937 "is_configured": true, 00:17:24.937 "data_offset": 2048, 00:17:24.937 "data_size": 63488 00:17:24.937 }, 00:17:24.937 { 00:17:24.937 "name": "BaseBdev3", 00:17:24.937 "uuid": "eda94ff8-0ba6-5e52-8b06-441256238349", 00:17:24.937 "is_configured": true, 00:17:24.937 "data_offset": 2048, 00:17:24.937 "data_size": 63488 00:17:24.937 } 00:17:24.937 ] 00:17:24.937 }' 00:17:24.937 04:08:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:24.937 04:08:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:25.197 04:08:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:25.197 04:08:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:25.197 04:08:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:25.197 04:08:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:25.197 04:08:18 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:25.197 04:08:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:25.197 04:08:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:25.197 04:08:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:25.197 04:08:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:25.458 04:08:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:25.458 04:08:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:25.458 "name": "raid_bdev1", 00:17:25.458 "uuid": "a3728886-a0e4-458f-9a37-af17451811cc", 00:17:25.458 "strip_size_kb": 64, 00:17:25.458 "state": "online", 00:17:25.458 "raid_level": "raid5f", 00:17:25.458 "superblock": true, 00:17:25.458 "num_base_bdevs": 3, 00:17:25.458 "num_base_bdevs_discovered": 2, 00:17:25.458 "num_base_bdevs_operational": 2, 00:17:25.458 "base_bdevs_list": [ 00:17:25.458 { 00:17:25.458 "name": null, 00:17:25.458 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:25.458 "is_configured": false, 00:17:25.458 "data_offset": 0, 00:17:25.458 "data_size": 63488 00:17:25.458 }, 00:17:25.458 { 00:17:25.458 "name": "BaseBdev2", 00:17:25.458 "uuid": "a164c015-1a1b-5738-aa67-30a682a85b44", 00:17:25.458 "is_configured": true, 00:17:25.458 "data_offset": 2048, 00:17:25.458 "data_size": 63488 00:17:25.458 }, 00:17:25.458 { 00:17:25.458 "name": "BaseBdev3", 00:17:25.458 "uuid": "eda94ff8-0ba6-5e52-8b06-441256238349", 00:17:25.458 "is_configured": true, 00:17:25.458 "data_offset": 2048, 00:17:25.458 "data_size": 63488 00:17:25.458 } 00:17:25.458 ] 00:17:25.458 }' 00:17:25.458 04:08:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:25.458 04:08:18 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:25.458 04:08:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:25.458 04:08:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:25.458 04:08:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:17:25.458 04:08:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:25.458 04:08:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:25.458 [2024-12-06 04:08:18.672621] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:25.458 [2024-12-06 04:08:18.690511] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000029050 00:17:25.458 04:08:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:25.458 04:08:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@663 -- # sleep 1 00:17:25.458 [2024-12-06 04:08:18.699146] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:17:26.394 04:08:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:26.394 04:08:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:26.394 04:08:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:26.394 04:08:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:26.394 04:08:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:26.394 04:08:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:26.394 04:08:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:17:26.394 04:08:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:26.394 04:08:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:26.394 04:08:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:26.654 04:08:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:26.654 "name": "raid_bdev1", 00:17:26.654 "uuid": "a3728886-a0e4-458f-9a37-af17451811cc", 00:17:26.654 "strip_size_kb": 64, 00:17:26.654 "state": "online", 00:17:26.654 "raid_level": "raid5f", 00:17:26.654 "superblock": true, 00:17:26.654 "num_base_bdevs": 3, 00:17:26.654 "num_base_bdevs_discovered": 3, 00:17:26.654 "num_base_bdevs_operational": 3, 00:17:26.654 "process": { 00:17:26.654 "type": "rebuild", 00:17:26.654 "target": "spare", 00:17:26.654 "progress": { 00:17:26.654 "blocks": 18432, 00:17:26.654 "percent": 14 00:17:26.654 } 00:17:26.654 }, 00:17:26.654 "base_bdevs_list": [ 00:17:26.654 { 00:17:26.654 "name": "spare", 00:17:26.654 "uuid": "fba89b4f-d19b-5150-9234-8cf1fe0ca77d", 00:17:26.654 "is_configured": true, 00:17:26.654 "data_offset": 2048, 00:17:26.654 "data_size": 63488 00:17:26.654 }, 00:17:26.654 { 00:17:26.654 "name": "BaseBdev2", 00:17:26.654 "uuid": "a164c015-1a1b-5738-aa67-30a682a85b44", 00:17:26.654 "is_configured": true, 00:17:26.654 "data_offset": 2048, 00:17:26.654 "data_size": 63488 00:17:26.654 }, 00:17:26.654 { 00:17:26.654 "name": "BaseBdev3", 00:17:26.654 "uuid": "eda94ff8-0ba6-5e52-8b06-441256238349", 00:17:26.654 "is_configured": true, 00:17:26.654 "data_offset": 2048, 00:17:26.654 "data_size": 63488 00:17:26.654 } 00:17:26.654 ] 00:17:26.654 }' 00:17:26.654 04:08:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:26.654 04:08:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 
00:17:26.654 04:08:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:26.654 04:08:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:26.654 04:08:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:17:26.654 04:08:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:17:26.654 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:17:26.654 04:08:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=3 00:17:26.654 04:08:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' raid5f = raid1 ']' 00:17:26.654 04:08:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@706 -- # local timeout=577 00:17:26.654 04:08:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:26.654 04:08:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:26.654 04:08:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:26.654 04:08:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:26.654 04:08:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:26.654 04:08:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:26.654 04:08:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:26.654 04:08:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:26.654 04:08:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:26.654 04:08:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- 
# set +x 00:17:26.654 04:08:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:26.654 04:08:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:26.655 "name": "raid_bdev1", 00:17:26.655 "uuid": "a3728886-a0e4-458f-9a37-af17451811cc", 00:17:26.655 "strip_size_kb": 64, 00:17:26.655 "state": "online", 00:17:26.655 "raid_level": "raid5f", 00:17:26.655 "superblock": true, 00:17:26.655 "num_base_bdevs": 3, 00:17:26.655 "num_base_bdevs_discovered": 3, 00:17:26.655 "num_base_bdevs_operational": 3, 00:17:26.655 "process": { 00:17:26.655 "type": "rebuild", 00:17:26.655 "target": "spare", 00:17:26.655 "progress": { 00:17:26.655 "blocks": 22528, 00:17:26.655 "percent": 17 00:17:26.655 } 00:17:26.655 }, 00:17:26.655 "base_bdevs_list": [ 00:17:26.655 { 00:17:26.655 "name": "spare", 00:17:26.655 "uuid": "fba89b4f-d19b-5150-9234-8cf1fe0ca77d", 00:17:26.655 "is_configured": true, 00:17:26.655 "data_offset": 2048, 00:17:26.655 "data_size": 63488 00:17:26.655 }, 00:17:26.655 { 00:17:26.655 "name": "BaseBdev2", 00:17:26.655 "uuid": "a164c015-1a1b-5738-aa67-30a682a85b44", 00:17:26.655 "is_configured": true, 00:17:26.655 "data_offset": 2048, 00:17:26.655 "data_size": 63488 00:17:26.655 }, 00:17:26.655 { 00:17:26.655 "name": "BaseBdev3", 00:17:26.655 "uuid": "eda94ff8-0ba6-5e52-8b06-441256238349", 00:17:26.655 "is_configured": true, 00:17:26.655 "data_offset": 2048, 00:17:26.655 "data_size": 63488 00:17:26.655 } 00:17:26.655 ] 00:17:26.655 }' 00:17:26.655 04:08:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:26.655 04:08:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:26.655 04:08:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:26.655 04:08:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 
00:17:26.655 04:08:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:17:28.035 04:08:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:28.036 04:08:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:28.036 04:08:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:28.036 04:08:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:28.036 04:08:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:28.036 04:08:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:28.036 04:08:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:28.036 04:08:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:28.036 04:08:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:28.036 04:08:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:28.036 04:08:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:28.036 04:08:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:28.036 "name": "raid_bdev1", 00:17:28.036 "uuid": "a3728886-a0e4-458f-9a37-af17451811cc", 00:17:28.036 "strip_size_kb": 64, 00:17:28.036 "state": "online", 00:17:28.036 "raid_level": "raid5f", 00:17:28.036 "superblock": true, 00:17:28.036 "num_base_bdevs": 3, 00:17:28.036 "num_base_bdevs_discovered": 3, 00:17:28.036 "num_base_bdevs_operational": 3, 00:17:28.036 "process": { 00:17:28.036 "type": "rebuild", 00:17:28.036 "target": "spare", 00:17:28.036 "progress": { 00:17:28.036 "blocks": 45056, 00:17:28.036 "percent": 35 00:17:28.036 } 00:17:28.036 }, 
00:17:28.036 "base_bdevs_list": [ 00:17:28.036 { 00:17:28.036 "name": "spare", 00:17:28.036 "uuid": "fba89b4f-d19b-5150-9234-8cf1fe0ca77d", 00:17:28.036 "is_configured": true, 00:17:28.036 "data_offset": 2048, 00:17:28.036 "data_size": 63488 00:17:28.036 }, 00:17:28.036 { 00:17:28.036 "name": "BaseBdev2", 00:17:28.036 "uuid": "a164c015-1a1b-5738-aa67-30a682a85b44", 00:17:28.036 "is_configured": true, 00:17:28.036 "data_offset": 2048, 00:17:28.036 "data_size": 63488 00:17:28.036 }, 00:17:28.036 { 00:17:28.036 "name": "BaseBdev3", 00:17:28.036 "uuid": "eda94ff8-0ba6-5e52-8b06-441256238349", 00:17:28.036 "is_configured": true, 00:17:28.036 "data_offset": 2048, 00:17:28.036 "data_size": 63488 00:17:28.036 } 00:17:28.036 ] 00:17:28.036 }' 00:17:28.036 04:08:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:28.036 04:08:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:28.036 04:08:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:28.036 04:08:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:28.036 04:08:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:17:28.994 04:08:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:28.994 04:08:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:28.994 04:08:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:28.994 04:08:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:28.994 04:08:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:28.994 04:08:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:28.994 
04:08:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:28.994 04:08:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:28.994 04:08:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:28.994 04:08:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:28.994 04:08:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:28.994 04:08:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:28.994 "name": "raid_bdev1", 00:17:28.994 "uuid": "a3728886-a0e4-458f-9a37-af17451811cc", 00:17:28.994 "strip_size_kb": 64, 00:17:28.994 "state": "online", 00:17:28.994 "raid_level": "raid5f", 00:17:28.994 "superblock": true, 00:17:28.994 "num_base_bdevs": 3, 00:17:28.994 "num_base_bdevs_discovered": 3, 00:17:28.994 "num_base_bdevs_operational": 3, 00:17:28.994 "process": { 00:17:28.994 "type": "rebuild", 00:17:28.994 "target": "spare", 00:17:28.994 "progress": { 00:17:28.994 "blocks": 67584, 00:17:28.994 "percent": 53 00:17:28.994 } 00:17:28.994 }, 00:17:28.994 "base_bdevs_list": [ 00:17:28.994 { 00:17:28.994 "name": "spare", 00:17:28.994 "uuid": "fba89b4f-d19b-5150-9234-8cf1fe0ca77d", 00:17:28.994 "is_configured": true, 00:17:28.994 "data_offset": 2048, 00:17:28.994 "data_size": 63488 00:17:28.994 }, 00:17:28.994 { 00:17:28.994 "name": "BaseBdev2", 00:17:28.994 "uuid": "a164c015-1a1b-5738-aa67-30a682a85b44", 00:17:28.994 "is_configured": true, 00:17:28.994 "data_offset": 2048, 00:17:28.994 "data_size": 63488 00:17:28.994 }, 00:17:28.994 { 00:17:28.994 "name": "BaseBdev3", 00:17:28.994 "uuid": "eda94ff8-0ba6-5e52-8b06-441256238349", 00:17:28.994 "is_configured": true, 00:17:28.994 "data_offset": 2048, 00:17:28.994 "data_size": 63488 00:17:28.994 } 00:17:28.994 ] 00:17:28.994 }' 00:17:28.994 04:08:22 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:28.994 04:08:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:28.994 04:08:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:28.994 04:08:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:28.994 04:08:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:17:29.928 04:08:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:29.928 04:08:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:29.928 04:08:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:29.928 04:08:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:29.928 04:08:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:29.928 04:08:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:29.928 04:08:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:29.928 04:08:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:29.928 04:08:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:29.928 04:08:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:30.187 04:08:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:30.187 04:08:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:30.187 "name": "raid_bdev1", 00:17:30.187 "uuid": "a3728886-a0e4-458f-9a37-af17451811cc", 00:17:30.187 
"strip_size_kb": 64, 00:17:30.187 "state": "online", 00:17:30.187 "raid_level": "raid5f", 00:17:30.187 "superblock": true, 00:17:30.187 "num_base_bdevs": 3, 00:17:30.187 "num_base_bdevs_discovered": 3, 00:17:30.187 "num_base_bdevs_operational": 3, 00:17:30.187 "process": { 00:17:30.187 "type": "rebuild", 00:17:30.187 "target": "spare", 00:17:30.187 "progress": { 00:17:30.187 "blocks": 92160, 00:17:30.187 "percent": 72 00:17:30.187 } 00:17:30.187 }, 00:17:30.187 "base_bdevs_list": [ 00:17:30.187 { 00:17:30.187 "name": "spare", 00:17:30.187 "uuid": "fba89b4f-d19b-5150-9234-8cf1fe0ca77d", 00:17:30.187 "is_configured": true, 00:17:30.187 "data_offset": 2048, 00:17:30.187 "data_size": 63488 00:17:30.187 }, 00:17:30.187 { 00:17:30.187 "name": "BaseBdev2", 00:17:30.187 "uuid": "a164c015-1a1b-5738-aa67-30a682a85b44", 00:17:30.187 "is_configured": true, 00:17:30.187 "data_offset": 2048, 00:17:30.187 "data_size": 63488 00:17:30.187 }, 00:17:30.187 { 00:17:30.187 "name": "BaseBdev3", 00:17:30.187 "uuid": "eda94ff8-0ba6-5e52-8b06-441256238349", 00:17:30.187 "is_configured": true, 00:17:30.187 "data_offset": 2048, 00:17:30.187 "data_size": 63488 00:17:30.187 } 00:17:30.187 ] 00:17:30.187 }' 00:17:30.187 04:08:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:30.187 04:08:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:30.187 04:08:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:30.187 04:08:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:30.187 04:08:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:17:31.123 04:08:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:31.123 04:08:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 
00:17:31.123 04:08:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:31.123 04:08:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:31.123 04:08:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:31.123 04:08:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:31.123 04:08:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:31.123 04:08:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:31.123 04:08:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:31.123 04:08:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:31.123 04:08:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:31.123 04:08:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:31.123 "name": "raid_bdev1", 00:17:31.123 "uuid": "a3728886-a0e4-458f-9a37-af17451811cc", 00:17:31.123 "strip_size_kb": 64, 00:17:31.123 "state": "online", 00:17:31.123 "raid_level": "raid5f", 00:17:31.123 "superblock": true, 00:17:31.123 "num_base_bdevs": 3, 00:17:31.123 "num_base_bdevs_discovered": 3, 00:17:31.123 "num_base_bdevs_operational": 3, 00:17:31.123 "process": { 00:17:31.123 "type": "rebuild", 00:17:31.123 "target": "spare", 00:17:31.123 "progress": { 00:17:31.123 "blocks": 114688, 00:17:31.123 "percent": 90 00:17:31.123 } 00:17:31.123 }, 00:17:31.123 "base_bdevs_list": [ 00:17:31.123 { 00:17:31.123 "name": "spare", 00:17:31.123 "uuid": "fba89b4f-d19b-5150-9234-8cf1fe0ca77d", 00:17:31.123 "is_configured": true, 00:17:31.123 "data_offset": 2048, 00:17:31.123 "data_size": 63488 00:17:31.123 }, 00:17:31.123 { 00:17:31.123 "name": "BaseBdev2", 00:17:31.123 "uuid": 
"a164c015-1a1b-5738-aa67-30a682a85b44", 00:17:31.123 "is_configured": true, 00:17:31.123 "data_offset": 2048, 00:17:31.123 "data_size": 63488 00:17:31.123 }, 00:17:31.123 { 00:17:31.123 "name": "BaseBdev3", 00:17:31.123 "uuid": "eda94ff8-0ba6-5e52-8b06-441256238349", 00:17:31.123 "is_configured": true, 00:17:31.123 "data_offset": 2048, 00:17:31.123 "data_size": 63488 00:17:31.123 } 00:17:31.123 ] 00:17:31.123 }' 00:17:31.123 04:08:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:31.383 04:08:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:31.383 04:08:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:31.383 04:08:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:31.383 04:08:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:17:31.644 [2024-12-06 04:08:24.953770] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:17:31.644 [2024-12-06 04:08:24.953881] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:17:31.644 [2024-12-06 04:08:24.954014] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:32.217 04:08:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:32.217 04:08:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:32.217 04:08:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:32.217 04:08:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:32.217 04:08:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:32.217 04:08:25 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:32.217 04:08:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:32.217 04:08:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:32.217 04:08:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:32.217 04:08:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:32.217 04:08:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:32.476 04:08:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:32.476 "name": "raid_bdev1", 00:17:32.476 "uuid": "a3728886-a0e4-458f-9a37-af17451811cc", 00:17:32.476 "strip_size_kb": 64, 00:17:32.476 "state": "online", 00:17:32.476 "raid_level": "raid5f", 00:17:32.476 "superblock": true, 00:17:32.476 "num_base_bdevs": 3, 00:17:32.476 "num_base_bdevs_discovered": 3, 00:17:32.476 "num_base_bdevs_operational": 3, 00:17:32.476 "base_bdevs_list": [ 00:17:32.476 { 00:17:32.476 "name": "spare", 00:17:32.476 "uuid": "fba89b4f-d19b-5150-9234-8cf1fe0ca77d", 00:17:32.476 "is_configured": true, 00:17:32.476 "data_offset": 2048, 00:17:32.476 "data_size": 63488 00:17:32.476 }, 00:17:32.476 { 00:17:32.476 "name": "BaseBdev2", 00:17:32.476 "uuid": "a164c015-1a1b-5738-aa67-30a682a85b44", 00:17:32.476 "is_configured": true, 00:17:32.476 "data_offset": 2048, 00:17:32.476 "data_size": 63488 00:17:32.476 }, 00:17:32.476 { 00:17:32.476 "name": "BaseBdev3", 00:17:32.476 "uuid": "eda94ff8-0ba6-5e52-8b06-441256238349", 00:17:32.476 "is_configured": true, 00:17:32.476 "data_offset": 2048, 00:17:32.476 "data_size": 63488 00:17:32.476 } 00:17:32.476 ] 00:17:32.476 }' 00:17:32.476 04:08:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:32.476 04:08:25 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:17:32.476 04:08:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:32.476 04:08:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:17:32.476 04:08:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@709 -- # break 00:17:32.476 04:08:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:32.476 04:08:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:32.476 04:08:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:32.476 04:08:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:32.476 04:08:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:32.476 04:08:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:32.476 04:08:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:32.476 04:08:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:32.476 04:08:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:32.476 04:08:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:32.476 04:08:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:32.476 "name": "raid_bdev1", 00:17:32.476 "uuid": "a3728886-a0e4-458f-9a37-af17451811cc", 00:17:32.476 "strip_size_kb": 64, 00:17:32.476 "state": "online", 00:17:32.476 "raid_level": "raid5f", 00:17:32.476 "superblock": true, 00:17:32.476 "num_base_bdevs": 3, 00:17:32.476 "num_base_bdevs_discovered": 3, 00:17:32.476 "num_base_bdevs_operational": 3, 00:17:32.476 "base_bdevs_list": [ 
00:17:32.476 { 00:17:32.477 "name": "spare", 00:17:32.477 "uuid": "fba89b4f-d19b-5150-9234-8cf1fe0ca77d", 00:17:32.477 "is_configured": true, 00:17:32.477 "data_offset": 2048, 00:17:32.477 "data_size": 63488 00:17:32.477 }, 00:17:32.477 { 00:17:32.477 "name": "BaseBdev2", 00:17:32.477 "uuid": "a164c015-1a1b-5738-aa67-30a682a85b44", 00:17:32.477 "is_configured": true, 00:17:32.477 "data_offset": 2048, 00:17:32.477 "data_size": 63488 00:17:32.477 }, 00:17:32.477 { 00:17:32.477 "name": "BaseBdev3", 00:17:32.477 "uuid": "eda94ff8-0ba6-5e52-8b06-441256238349", 00:17:32.477 "is_configured": true, 00:17:32.477 "data_offset": 2048, 00:17:32.477 "data_size": 63488 00:17:32.477 } 00:17:32.477 ] 00:17:32.477 }' 00:17:32.477 04:08:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:32.477 04:08:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:32.477 04:08:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:32.477 04:08:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:32.477 04:08:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:17:32.477 04:08:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:32.477 04:08:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:32.477 04:08:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:32.477 04:08:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:32.477 04:08:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:32.477 04:08:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:32.477 04:08:25 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:32.477 04:08:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:32.477 04:08:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:32.477 04:08:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:32.477 04:08:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:32.477 04:08:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:32.477 04:08:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:32.477 04:08:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:32.735 04:08:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:32.735 "name": "raid_bdev1", 00:17:32.735 "uuid": "a3728886-a0e4-458f-9a37-af17451811cc", 00:17:32.735 "strip_size_kb": 64, 00:17:32.735 "state": "online", 00:17:32.735 "raid_level": "raid5f", 00:17:32.735 "superblock": true, 00:17:32.735 "num_base_bdevs": 3, 00:17:32.735 "num_base_bdevs_discovered": 3, 00:17:32.736 "num_base_bdevs_operational": 3, 00:17:32.736 "base_bdevs_list": [ 00:17:32.736 { 00:17:32.736 "name": "spare", 00:17:32.736 "uuid": "fba89b4f-d19b-5150-9234-8cf1fe0ca77d", 00:17:32.736 "is_configured": true, 00:17:32.736 "data_offset": 2048, 00:17:32.736 "data_size": 63488 00:17:32.736 }, 00:17:32.736 { 00:17:32.736 "name": "BaseBdev2", 00:17:32.736 "uuid": "a164c015-1a1b-5738-aa67-30a682a85b44", 00:17:32.736 "is_configured": true, 00:17:32.736 "data_offset": 2048, 00:17:32.736 "data_size": 63488 00:17:32.736 }, 00:17:32.736 { 00:17:32.736 "name": "BaseBdev3", 00:17:32.736 "uuid": "eda94ff8-0ba6-5e52-8b06-441256238349", 00:17:32.736 "is_configured": true, 00:17:32.736 "data_offset": 2048, 00:17:32.736 
"data_size": 63488 00:17:32.736 } 00:17:32.736 ] 00:17:32.736 }' 00:17:32.736 04:08:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:32.736 04:08:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:32.994 04:08:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:17:32.994 04:08:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:32.994 04:08:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:32.994 [2024-12-06 04:08:26.253235] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:32.994 [2024-12-06 04:08:26.253282] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:32.994 [2024-12-06 04:08:26.253389] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:32.994 [2024-12-06 04:08:26.253501] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:32.994 [2024-12-06 04:08:26.253521] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:17:32.994 04:08:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:32.994 04:08:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # jq length 00:17:32.994 04:08:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:32.994 04:08:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:32.994 04:08:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:32.994 04:08:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:32.994 04:08:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 
00:17:32.994 04:08:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:17:32.994 04:08:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:17:32.994 04:08:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:17:32.994 04:08:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:17:32.994 04:08:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:17:32.994 04:08:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:17:32.994 04:08:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:17:32.994 04:08:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:17:32.994 04:08:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:17:32.994 04:08:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:17:32.994 04:08:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:17:32.994 04:08:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:17:33.252 /dev/nbd0 00:17:33.252 04:08:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:17:33.252 04:08:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:17:33.252 04:08:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:17:33.252 04:08:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:17:33.252 04:08:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:17:33.252 04:08:26 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:17:33.252 04:08:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:17:33.252 04:08:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:17:33.252 04:08:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:17:33.252 04:08:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:17:33.252 04:08:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:17:33.252 1+0 records in 00:17:33.252 1+0 records out 00:17:33.252 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000472877 s, 8.7 MB/s 00:17:33.252 04:08:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:33.252 04:08:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:17:33.252 04:08:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:33.252 04:08:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:17:33.252 04:08:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:17:33.252 04:08:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:17:33.252 04:08:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:17:33.252 04:08:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:17:33.511 /dev/nbd1 00:17:33.511 04:08:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:17:33.511 04:08:26 
bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:17:33.511 04:08:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:17:33.511 04:08:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:17:33.511 04:08:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:17:33.511 04:08:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:17:33.511 04:08:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:17:33.511 04:08:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:17:33.511 04:08:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:17:33.511 04:08:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:17:33.511 04:08:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:17:33.511 1+0 records in 00:17:33.511 1+0 records out 00:17:33.511 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000394201 s, 10.4 MB/s 00:17:33.511 04:08:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:33.790 04:08:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:17:33.790 04:08:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:33.790 04:08:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:17:33.790 04:08:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:17:33.790 04:08:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:17:33.790 04:08:26 
bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:17:33.790 04:08:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:17:33.790 04:08:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:17:33.790 04:08:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:17:33.790 04:08:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:17:33.790 04:08:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:17:33.790 04:08:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:17:33.790 04:08:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:17:33.790 04:08:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:17:34.049 04:08:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:17:34.049 04:08:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:17:34.049 04:08:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:17:34.049 04:08:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:17:34.049 04:08:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:17:34.049 04:08:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:17:34.049 04:08:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:17:34.049 04:08:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:17:34.049 04:08:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:17:34.049 
04:08:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:17:34.309 04:08:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:17:34.309 04:08:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:17:34.309 04:08:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:17:34.309 04:08:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:17:34.309 04:08:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:17:34.309 04:08:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:17:34.309 04:08:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:17:34.309 04:08:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:17:34.309 04:08:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:17:34.309 04:08:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:17:34.309 04:08:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:34.309 04:08:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:34.309 04:08:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:34.309 04:08:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:17:34.309 04:08:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:34.309 04:08:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:34.309 [2024-12-06 04:08:27.571153] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:17:34.309 
[2024-12-06 04:08:27.571277] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:34.309 [2024-12-06 04:08:27.571309] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:17:34.309 [2024-12-06 04:08:27.571323] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:34.309 [2024-12-06 04:08:27.573961] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:34.309 [2024-12-06 04:08:27.574004] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:17:34.309 [2024-12-06 04:08:27.574104] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:17:34.309 [2024-12-06 04:08:27.574161] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:34.309 [2024-12-06 04:08:27.574313] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:34.309 [2024-12-06 04:08:27.574440] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:17:34.309 spare 00:17:34.309 04:08:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:34.309 04:08:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:17:34.309 04:08:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:34.309 04:08:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:34.568 [2024-12-06 04:08:27.674366] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:17:34.568 [2024-12-06 04:08:27.674481] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:17:34.568 [2024-12-06 04:08:27.674884] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000047700 00:17:34.568 [2024-12-06 04:08:27.681009] 
bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:17:34.568 [2024-12-06 04:08:27.681097] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:17:34.568 [2024-12-06 04:08:27.681411] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:34.568 04:08:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:34.568 04:08:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:17:34.568 04:08:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:34.568 04:08:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:34.568 04:08:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:34.568 04:08:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:34.568 04:08:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:34.568 04:08:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:34.568 04:08:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:34.568 04:08:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:34.568 04:08:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:34.568 04:08:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:34.568 04:08:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:34.568 04:08:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:34.568 04:08:27 bdev_raid.raid5f_rebuild_test_sb 
-- common/autotest_common.sh@10 -- # set +x 00:17:34.568 04:08:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:34.568 04:08:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:34.568 "name": "raid_bdev1", 00:17:34.568 "uuid": "a3728886-a0e4-458f-9a37-af17451811cc", 00:17:34.568 "strip_size_kb": 64, 00:17:34.568 "state": "online", 00:17:34.568 "raid_level": "raid5f", 00:17:34.568 "superblock": true, 00:17:34.568 "num_base_bdevs": 3, 00:17:34.568 "num_base_bdevs_discovered": 3, 00:17:34.568 "num_base_bdevs_operational": 3, 00:17:34.568 "base_bdevs_list": [ 00:17:34.568 { 00:17:34.568 "name": "spare", 00:17:34.568 "uuid": "fba89b4f-d19b-5150-9234-8cf1fe0ca77d", 00:17:34.568 "is_configured": true, 00:17:34.568 "data_offset": 2048, 00:17:34.568 "data_size": 63488 00:17:34.568 }, 00:17:34.568 { 00:17:34.568 "name": "BaseBdev2", 00:17:34.568 "uuid": "a164c015-1a1b-5738-aa67-30a682a85b44", 00:17:34.568 "is_configured": true, 00:17:34.568 "data_offset": 2048, 00:17:34.568 "data_size": 63488 00:17:34.568 }, 00:17:34.568 { 00:17:34.568 "name": "BaseBdev3", 00:17:34.568 "uuid": "eda94ff8-0ba6-5e52-8b06-441256238349", 00:17:34.568 "is_configured": true, 00:17:34.568 "data_offset": 2048, 00:17:34.568 "data_size": 63488 00:17:34.568 } 00:17:34.568 ] 00:17:34.568 }' 00:17:34.568 04:08:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:34.568 04:08:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:34.827 04:08:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:34.827 04:08:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:34.827 04:08:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:34.827 04:08:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # 
local target=none 00:17:34.827 04:08:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:34.827 04:08:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:34.827 04:08:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:34.827 04:08:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:34.827 04:08:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:34.827 04:08:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:34.827 04:08:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:34.827 "name": "raid_bdev1", 00:17:34.827 "uuid": "a3728886-a0e4-458f-9a37-af17451811cc", 00:17:34.827 "strip_size_kb": 64, 00:17:34.827 "state": "online", 00:17:34.827 "raid_level": "raid5f", 00:17:34.827 "superblock": true, 00:17:34.827 "num_base_bdevs": 3, 00:17:34.827 "num_base_bdevs_discovered": 3, 00:17:34.827 "num_base_bdevs_operational": 3, 00:17:34.827 "base_bdevs_list": [ 00:17:34.827 { 00:17:34.827 "name": "spare", 00:17:34.827 "uuid": "fba89b4f-d19b-5150-9234-8cf1fe0ca77d", 00:17:34.827 "is_configured": true, 00:17:34.827 "data_offset": 2048, 00:17:34.827 "data_size": 63488 00:17:34.827 }, 00:17:34.827 { 00:17:34.827 "name": "BaseBdev2", 00:17:34.827 "uuid": "a164c015-1a1b-5738-aa67-30a682a85b44", 00:17:34.827 "is_configured": true, 00:17:34.827 "data_offset": 2048, 00:17:34.827 "data_size": 63488 00:17:34.827 }, 00:17:34.827 { 00:17:34.827 "name": "BaseBdev3", 00:17:34.827 "uuid": "eda94ff8-0ba6-5e52-8b06-441256238349", 00:17:34.827 "is_configured": true, 00:17:34.828 "data_offset": 2048, 00:17:34.828 "data_size": 63488 00:17:34.828 } 00:17:34.828 ] 00:17:34.828 }' 00:17:34.828 04:08:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // 
"none"' 00:17:35.087 04:08:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:35.087 04:08:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:35.087 04:08:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:35.087 04:08:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:35.087 04:08:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:17:35.087 04:08:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:35.087 04:08:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:35.087 04:08:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:35.087 04:08:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:17:35.087 04:08:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:17:35.087 04:08:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:35.087 04:08:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:35.087 [2024-12-06 04:08:28.287625] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:35.087 04:08:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:35.087 04:08:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:17:35.087 04:08:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:35.087 04:08:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:35.087 04:08:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # 
local raid_level=raid5f 00:17:35.087 04:08:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:35.087 04:08:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:35.087 04:08:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:35.087 04:08:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:35.087 04:08:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:35.087 04:08:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:35.087 04:08:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:35.087 04:08:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:35.087 04:08:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:35.087 04:08:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:35.087 04:08:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:35.087 04:08:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:35.087 "name": "raid_bdev1", 00:17:35.087 "uuid": "a3728886-a0e4-458f-9a37-af17451811cc", 00:17:35.087 "strip_size_kb": 64, 00:17:35.087 "state": "online", 00:17:35.087 "raid_level": "raid5f", 00:17:35.087 "superblock": true, 00:17:35.087 "num_base_bdevs": 3, 00:17:35.087 "num_base_bdevs_discovered": 2, 00:17:35.087 "num_base_bdevs_operational": 2, 00:17:35.087 "base_bdevs_list": [ 00:17:35.087 { 00:17:35.087 "name": null, 00:17:35.087 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:35.087 "is_configured": false, 00:17:35.087 "data_offset": 0, 00:17:35.087 "data_size": 63488 00:17:35.087 }, 00:17:35.087 { 00:17:35.087 "name": "BaseBdev2", 
00:17:35.087 "uuid": "a164c015-1a1b-5738-aa67-30a682a85b44", 00:17:35.087 "is_configured": true, 00:17:35.087 "data_offset": 2048, 00:17:35.087 "data_size": 63488 00:17:35.087 }, 00:17:35.087 { 00:17:35.087 "name": "BaseBdev3", 00:17:35.087 "uuid": "eda94ff8-0ba6-5e52-8b06-441256238349", 00:17:35.087 "is_configured": true, 00:17:35.087 "data_offset": 2048, 00:17:35.087 "data_size": 63488 00:17:35.087 } 00:17:35.087 ] 00:17:35.087 }' 00:17:35.087 04:08:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:35.087 04:08:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:35.347 04:08:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:17:35.347 04:08:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:35.347 04:08:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:35.347 [2024-12-06 04:08:28.694963] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:35.347 [2024-12-06 04:08:28.695251] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:17:35.347 [2024-12-06 04:08:28.695358] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:17:35.347 [2024-12-06 04:08:28.695460] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:35.607 [2024-12-06 04:08:28.712840] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000477d0 00:17:35.608 04:08:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:35.608 04:08:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@757 -- # sleep 1 00:17:35.608 [2024-12-06 04:08:28.721121] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:17:36.548 04:08:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:36.548 04:08:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:36.548 04:08:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:36.548 04:08:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:36.548 04:08:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:36.548 04:08:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:36.548 04:08:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:36.548 04:08:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:36.548 04:08:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:36.548 04:08:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:36.548 04:08:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:36.548 "name": "raid_bdev1", 00:17:36.548 "uuid": "a3728886-a0e4-458f-9a37-af17451811cc", 00:17:36.548 "strip_size_kb": 64, 00:17:36.548 "state": "online", 00:17:36.548 
"raid_level": "raid5f", 00:17:36.548 "superblock": true, 00:17:36.548 "num_base_bdevs": 3, 00:17:36.548 "num_base_bdevs_discovered": 3, 00:17:36.548 "num_base_bdevs_operational": 3, 00:17:36.548 "process": { 00:17:36.548 "type": "rebuild", 00:17:36.548 "target": "spare", 00:17:36.548 "progress": { 00:17:36.548 "blocks": 20480, 00:17:36.548 "percent": 16 00:17:36.548 } 00:17:36.548 }, 00:17:36.548 "base_bdevs_list": [ 00:17:36.548 { 00:17:36.548 "name": "spare", 00:17:36.548 "uuid": "fba89b4f-d19b-5150-9234-8cf1fe0ca77d", 00:17:36.548 "is_configured": true, 00:17:36.548 "data_offset": 2048, 00:17:36.548 "data_size": 63488 00:17:36.548 }, 00:17:36.548 { 00:17:36.548 "name": "BaseBdev2", 00:17:36.548 "uuid": "a164c015-1a1b-5738-aa67-30a682a85b44", 00:17:36.548 "is_configured": true, 00:17:36.548 "data_offset": 2048, 00:17:36.548 "data_size": 63488 00:17:36.548 }, 00:17:36.548 { 00:17:36.548 "name": "BaseBdev3", 00:17:36.548 "uuid": "eda94ff8-0ba6-5e52-8b06-441256238349", 00:17:36.548 "is_configured": true, 00:17:36.548 "data_offset": 2048, 00:17:36.548 "data_size": 63488 00:17:36.548 } 00:17:36.548 ] 00:17:36.548 }' 00:17:36.548 04:08:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:36.548 04:08:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:36.548 04:08:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:36.548 04:08:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:36.548 04:08:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:17:36.548 04:08:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:36.548 04:08:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:36.548 [2024-12-06 04:08:29.876808] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:36.808 [2024-12-06 04:08:29.930950] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:17:36.808 [2024-12-06 04:08:29.931054] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:36.808 [2024-12-06 04:08:29.931092] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:36.808 [2024-12-06 04:08:29.931104] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:17:36.808 04:08:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:36.808 04:08:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:17:36.808 04:08:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:36.808 04:08:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:36.808 04:08:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:36.808 04:08:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:36.808 04:08:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:36.809 04:08:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:36.809 04:08:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:36.809 04:08:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:36.809 04:08:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:36.809 04:08:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:36.809 04:08:29 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:17:36.809 04:08:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:36.809 04:08:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:36.809 04:08:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:36.809 04:08:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:36.809 "name": "raid_bdev1", 00:17:36.809 "uuid": "a3728886-a0e4-458f-9a37-af17451811cc", 00:17:36.809 "strip_size_kb": 64, 00:17:36.809 "state": "online", 00:17:36.809 "raid_level": "raid5f", 00:17:36.809 "superblock": true, 00:17:36.809 "num_base_bdevs": 3, 00:17:36.809 "num_base_bdevs_discovered": 2, 00:17:36.809 "num_base_bdevs_operational": 2, 00:17:36.809 "base_bdevs_list": [ 00:17:36.809 { 00:17:36.809 "name": null, 00:17:36.809 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:36.809 "is_configured": false, 00:17:36.809 "data_offset": 0, 00:17:36.809 "data_size": 63488 00:17:36.809 }, 00:17:36.809 { 00:17:36.809 "name": "BaseBdev2", 00:17:36.809 "uuid": "a164c015-1a1b-5738-aa67-30a682a85b44", 00:17:36.809 "is_configured": true, 00:17:36.809 "data_offset": 2048, 00:17:36.809 "data_size": 63488 00:17:36.809 }, 00:17:36.809 { 00:17:36.809 "name": "BaseBdev3", 00:17:36.809 "uuid": "eda94ff8-0ba6-5e52-8b06-441256238349", 00:17:36.809 "is_configured": true, 00:17:36.809 "data_offset": 2048, 00:17:36.809 "data_size": 63488 00:17:36.809 } 00:17:36.809 ] 00:17:36.809 }' 00:17:36.809 04:08:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:36.809 04:08:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:37.068 04:08:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:17:37.068 04:08:30 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:17:37.068 04:08:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:37.068 [2024-12-06 04:08:30.382724] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:17:37.068 [2024-12-06 04:08:30.382885] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:37.068 [2024-12-06 04:08:30.382931] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b780 00:17:37.069 [2024-12-06 04:08:30.382980] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:37.069 [2024-12-06 04:08:30.383554] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:37.069 [2024-12-06 04:08:30.383633] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:17:37.069 [2024-12-06 04:08:30.383776] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:17:37.069 [2024-12-06 04:08:30.383828] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:17:37.069 [2024-12-06 04:08:30.383877] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:17:37.069 [2024-12-06 04:08:30.383931] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:37.069 [2024-12-06 04:08:30.399597] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000478a0 00:17:37.069 spare 00:17:37.069 04:08:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:37.069 04:08:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@764 -- # sleep 1 00:17:37.069 [2024-12-06 04:08:30.406866] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:17:38.468 04:08:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:38.468 04:08:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:38.468 04:08:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:38.468 04:08:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:38.468 04:08:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:38.468 04:08:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:38.468 04:08:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:38.468 04:08:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:38.468 04:08:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:38.468 04:08:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:38.468 04:08:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:38.468 "name": "raid_bdev1", 00:17:38.468 "uuid": "a3728886-a0e4-458f-9a37-af17451811cc", 00:17:38.468 "strip_size_kb": 64, 00:17:38.468 "state": 
"online", 00:17:38.468 "raid_level": "raid5f", 00:17:38.468 "superblock": true, 00:17:38.468 "num_base_bdevs": 3, 00:17:38.468 "num_base_bdevs_discovered": 3, 00:17:38.468 "num_base_bdevs_operational": 3, 00:17:38.468 "process": { 00:17:38.468 "type": "rebuild", 00:17:38.468 "target": "spare", 00:17:38.468 "progress": { 00:17:38.468 "blocks": 20480, 00:17:38.468 "percent": 16 00:17:38.468 } 00:17:38.468 }, 00:17:38.468 "base_bdevs_list": [ 00:17:38.468 { 00:17:38.468 "name": "spare", 00:17:38.468 "uuid": "fba89b4f-d19b-5150-9234-8cf1fe0ca77d", 00:17:38.468 "is_configured": true, 00:17:38.468 "data_offset": 2048, 00:17:38.468 "data_size": 63488 00:17:38.468 }, 00:17:38.468 { 00:17:38.468 "name": "BaseBdev2", 00:17:38.468 "uuid": "a164c015-1a1b-5738-aa67-30a682a85b44", 00:17:38.468 "is_configured": true, 00:17:38.468 "data_offset": 2048, 00:17:38.468 "data_size": 63488 00:17:38.468 }, 00:17:38.468 { 00:17:38.468 "name": "BaseBdev3", 00:17:38.468 "uuid": "eda94ff8-0ba6-5e52-8b06-441256238349", 00:17:38.468 "is_configured": true, 00:17:38.468 "data_offset": 2048, 00:17:38.468 "data_size": 63488 00:17:38.468 } 00:17:38.468 ] 00:17:38.468 }' 00:17:38.468 04:08:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:38.468 04:08:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:38.468 04:08:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:38.468 04:08:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:38.468 04:08:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:17:38.468 04:08:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:38.468 04:08:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:38.468 [2024-12-06 04:08:31.538206] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:38.468 [2024-12-06 04:08:31.615793] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:17:38.468 [2024-12-06 04:08:31.615865] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:38.468 [2024-12-06 04:08:31.615887] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:38.468 [2024-12-06 04:08:31.615897] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:17:38.468 04:08:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:38.468 04:08:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:17:38.468 04:08:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:38.468 04:08:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:38.468 04:08:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:38.468 04:08:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:38.468 04:08:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:38.468 04:08:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:38.468 04:08:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:38.468 04:08:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:38.468 04:08:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:38.468 04:08:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:38.468 04:08:31 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:38.468 04:08:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:38.468 04:08:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:38.468 04:08:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:38.468 04:08:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:38.468 "name": "raid_bdev1", 00:17:38.468 "uuid": "a3728886-a0e4-458f-9a37-af17451811cc", 00:17:38.468 "strip_size_kb": 64, 00:17:38.468 "state": "online", 00:17:38.468 "raid_level": "raid5f", 00:17:38.468 "superblock": true, 00:17:38.468 "num_base_bdevs": 3, 00:17:38.468 "num_base_bdevs_discovered": 2, 00:17:38.468 "num_base_bdevs_operational": 2, 00:17:38.468 "base_bdevs_list": [ 00:17:38.468 { 00:17:38.468 "name": null, 00:17:38.468 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:38.468 "is_configured": false, 00:17:38.468 "data_offset": 0, 00:17:38.468 "data_size": 63488 00:17:38.468 }, 00:17:38.468 { 00:17:38.468 "name": "BaseBdev2", 00:17:38.469 "uuid": "a164c015-1a1b-5738-aa67-30a682a85b44", 00:17:38.469 "is_configured": true, 00:17:38.469 "data_offset": 2048, 00:17:38.469 "data_size": 63488 00:17:38.469 }, 00:17:38.469 { 00:17:38.469 "name": "BaseBdev3", 00:17:38.469 "uuid": "eda94ff8-0ba6-5e52-8b06-441256238349", 00:17:38.469 "is_configured": true, 00:17:38.469 "data_offset": 2048, 00:17:38.469 "data_size": 63488 00:17:38.469 } 00:17:38.469 ] 00:17:38.469 }' 00:17:38.469 04:08:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:38.469 04:08:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:39.038 04:08:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:39.038 04:08:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # 
local raid_bdev_name=raid_bdev1 00:17:39.038 04:08:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:39.038 04:08:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:39.038 04:08:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:39.038 04:08:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:39.038 04:08:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:39.038 04:08:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:39.038 04:08:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:39.038 04:08:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:39.038 04:08:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:39.038 "name": "raid_bdev1", 00:17:39.038 "uuid": "a3728886-a0e4-458f-9a37-af17451811cc", 00:17:39.038 "strip_size_kb": 64, 00:17:39.038 "state": "online", 00:17:39.038 "raid_level": "raid5f", 00:17:39.038 "superblock": true, 00:17:39.038 "num_base_bdevs": 3, 00:17:39.038 "num_base_bdevs_discovered": 2, 00:17:39.038 "num_base_bdevs_operational": 2, 00:17:39.038 "base_bdevs_list": [ 00:17:39.038 { 00:17:39.038 "name": null, 00:17:39.038 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:39.038 "is_configured": false, 00:17:39.038 "data_offset": 0, 00:17:39.038 "data_size": 63488 00:17:39.038 }, 00:17:39.038 { 00:17:39.038 "name": "BaseBdev2", 00:17:39.038 "uuid": "a164c015-1a1b-5738-aa67-30a682a85b44", 00:17:39.038 "is_configured": true, 00:17:39.038 "data_offset": 2048, 00:17:39.038 "data_size": 63488 00:17:39.038 }, 00:17:39.038 { 00:17:39.038 "name": "BaseBdev3", 00:17:39.038 "uuid": "eda94ff8-0ba6-5e52-8b06-441256238349", 00:17:39.038 "is_configured": true, 
00:17:39.038 "data_offset": 2048, 00:17:39.038 "data_size": 63488 00:17:39.038 } 00:17:39.038 ] 00:17:39.038 }' 00:17:39.038 04:08:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:39.038 04:08:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:39.038 04:08:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:39.038 04:08:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:39.038 04:08:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:17:39.038 04:08:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:39.038 04:08:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:39.038 04:08:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:39.038 04:08:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:17:39.038 04:08:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:39.038 04:08:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:39.038 [2024-12-06 04:08:32.277103] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:17:39.038 [2024-12-06 04:08:32.277160] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:39.038 [2024-12-06 04:08:32.277186] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000bd80 00:17:39.038 [2024-12-06 04:08:32.277196] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:39.038 [2024-12-06 04:08:32.277634] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:39.038 [2024-12-06 
04:08:32.277653] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:17:39.039 [2024-12-06 04:08:32.277731] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:17:39.039 [2024-12-06 04:08:32.277749] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:17:39.039 [2024-12-06 04:08:32.277767] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:17:39.039 [2024-12-06 04:08:32.277777] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:17:39.039 BaseBdev1 00:17:39.039 04:08:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:39.039 04:08:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@775 -- # sleep 1 00:17:40.005 04:08:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:17:40.005 04:08:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:40.005 04:08:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:40.005 04:08:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:40.005 04:08:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:40.005 04:08:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:40.005 04:08:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:40.005 04:08:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:40.005 04:08:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:40.005 04:08:33 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:40.005 04:08:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:40.005 04:08:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:40.005 04:08:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:40.005 04:08:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:40.005 04:08:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:40.005 04:08:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:40.005 "name": "raid_bdev1", 00:17:40.005 "uuid": "a3728886-a0e4-458f-9a37-af17451811cc", 00:17:40.005 "strip_size_kb": 64, 00:17:40.005 "state": "online", 00:17:40.005 "raid_level": "raid5f", 00:17:40.005 "superblock": true, 00:17:40.005 "num_base_bdevs": 3, 00:17:40.005 "num_base_bdevs_discovered": 2, 00:17:40.005 "num_base_bdevs_operational": 2, 00:17:40.005 "base_bdevs_list": [ 00:17:40.005 { 00:17:40.005 "name": null, 00:17:40.005 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:40.005 "is_configured": false, 00:17:40.005 "data_offset": 0, 00:17:40.005 "data_size": 63488 00:17:40.005 }, 00:17:40.005 { 00:17:40.005 "name": "BaseBdev2", 00:17:40.005 "uuid": "a164c015-1a1b-5738-aa67-30a682a85b44", 00:17:40.005 "is_configured": true, 00:17:40.005 "data_offset": 2048, 00:17:40.005 "data_size": 63488 00:17:40.005 }, 00:17:40.005 { 00:17:40.005 "name": "BaseBdev3", 00:17:40.005 "uuid": "eda94ff8-0ba6-5e52-8b06-441256238349", 00:17:40.005 "is_configured": true, 00:17:40.005 "data_offset": 2048, 00:17:40.005 "data_size": 63488 00:17:40.005 } 00:17:40.005 ] 00:17:40.005 }' 00:17:40.005 04:08:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:40.005 04:08:33 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:17:40.601 04:08:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:40.601 04:08:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:40.601 04:08:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:40.601 04:08:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:40.601 04:08:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:40.601 04:08:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:40.601 04:08:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:40.601 04:08:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:40.601 04:08:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:40.601 04:08:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:40.601 04:08:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:40.601 "name": "raid_bdev1", 00:17:40.601 "uuid": "a3728886-a0e4-458f-9a37-af17451811cc", 00:17:40.601 "strip_size_kb": 64, 00:17:40.601 "state": "online", 00:17:40.601 "raid_level": "raid5f", 00:17:40.601 "superblock": true, 00:17:40.601 "num_base_bdevs": 3, 00:17:40.601 "num_base_bdevs_discovered": 2, 00:17:40.601 "num_base_bdevs_operational": 2, 00:17:40.601 "base_bdevs_list": [ 00:17:40.601 { 00:17:40.601 "name": null, 00:17:40.601 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:40.601 "is_configured": false, 00:17:40.601 "data_offset": 0, 00:17:40.601 "data_size": 63488 00:17:40.601 }, 00:17:40.601 { 00:17:40.601 "name": "BaseBdev2", 00:17:40.601 "uuid": "a164c015-1a1b-5738-aa67-30a682a85b44", 
00:17:40.601 "is_configured": true, 00:17:40.601 "data_offset": 2048, 00:17:40.601 "data_size": 63488 00:17:40.601 }, 00:17:40.601 { 00:17:40.601 "name": "BaseBdev3", 00:17:40.602 "uuid": "eda94ff8-0ba6-5e52-8b06-441256238349", 00:17:40.602 "is_configured": true, 00:17:40.602 "data_offset": 2048, 00:17:40.602 "data_size": 63488 00:17:40.602 } 00:17:40.602 ] 00:17:40.602 }' 00:17:40.602 04:08:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:40.602 04:08:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:40.602 04:08:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:40.602 04:08:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:40.602 04:08:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:17:40.602 04:08:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@652 -- # local es=0 00:17:40.602 04:08:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:17:40.602 04:08:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:17:40.602 04:08:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:40.602 04:08:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:17:40.602 04:08:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:40.602 04:08:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:17:40.602 04:08:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:40.602 04:08:33 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:40.602 [2024-12-06 04:08:33.874505] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:40.602 [2024-12-06 04:08:33.874746] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:17:40.602 [2024-12-06 04:08:33.874822] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:17:40.602 request: 00:17:40.602 { 00:17:40.602 "base_bdev": "BaseBdev1", 00:17:40.602 "raid_bdev": "raid_bdev1", 00:17:40.602 "method": "bdev_raid_add_base_bdev", 00:17:40.602 "req_id": 1 00:17:40.602 } 00:17:40.602 Got JSON-RPC error response 00:17:40.602 response: 00:17:40.602 { 00:17:40.602 "code": -22, 00:17:40.602 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:17:40.602 } 00:17:40.602 04:08:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:17:40.602 04:08:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@655 -- # es=1 00:17:40.602 04:08:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:17:40.602 04:08:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:17:40.602 04:08:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:17:40.602 04:08:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@779 -- # sleep 1 00:17:41.541 04:08:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:17:41.541 04:08:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:41.541 04:08:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:41.541 04:08:34 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:41.541 04:08:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:41.541 04:08:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:41.541 04:08:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:41.541 04:08:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:41.541 04:08:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:41.541 04:08:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:41.800 04:08:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:41.800 04:08:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:41.800 04:08:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:41.800 04:08:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:41.800 04:08:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:41.800 04:08:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:41.800 "name": "raid_bdev1", 00:17:41.800 "uuid": "a3728886-a0e4-458f-9a37-af17451811cc", 00:17:41.800 "strip_size_kb": 64, 00:17:41.800 "state": "online", 00:17:41.800 "raid_level": "raid5f", 00:17:41.801 "superblock": true, 00:17:41.801 "num_base_bdevs": 3, 00:17:41.801 "num_base_bdevs_discovered": 2, 00:17:41.801 "num_base_bdevs_operational": 2, 00:17:41.801 "base_bdevs_list": [ 00:17:41.801 { 00:17:41.801 "name": null, 00:17:41.801 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:41.801 "is_configured": false, 00:17:41.801 "data_offset": 0, 00:17:41.801 "data_size": 63488 00:17:41.801 }, 00:17:41.801 { 00:17:41.801 
"name": "BaseBdev2", 00:17:41.801 "uuid": "a164c015-1a1b-5738-aa67-30a682a85b44", 00:17:41.801 "is_configured": true, 00:17:41.801 "data_offset": 2048, 00:17:41.801 "data_size": 63488 00:17:41.801 }, 00:17:41.801 { 00:17:41.801 "name": "BaseBdev3", 00:17:41.801 "uuid": "eda94ff8-0ba6-5e52-8b06-441256238349", 00:17:41.801 "is_configured": true, 00:17:41.801 "data_offset": 2048, 00:17:41.801 "data_size": 63488 00:17:41.801 } 00:17:41.801 ] 00:17:41.801 }' 00:17:41.801 04:08:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:41.801 04:08:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:42.059 04:08:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:42.059 04:08:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:42.059 04:08:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:42.059 04:08:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:42.059 04:08:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:42.059 04:08:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:42.059 04:08:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:42.059 04:08:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:42.059 04:08:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:42.059 04:08:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:42.059 04:08:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:42.059 "name": "raid_bdev1", 00:17:42.059 "uuid": "a3728886-a0e4-458f-9a37-af17451811cc", 00:17:42.059 
"strip_size_kb": 64, 00:17:42.059 "state": "online", 00:17:42.059 "raid_level": "raid5f", 00:17:42.059 "superblock": true, 00:17:42.059 "num_base_bdevs": 3, 00:17:42.059 "num_base_bdevs_discovered": 2, 00:17:42.059 "num_base_bdevs_operational": 2, 00:17:42.059 "base_bdevs_list": [ 00:17:42.059 { 00:17:42.059 "name": null, 00:17:42.059 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:42.059 "is_configured": false, 00:17:42.059 "data_offset": 0, 00:17:42.059 "data_size": 63488 00:17:42.059 }, 00:17:42.059 { 00:17:42.059 "name": "BaseBdev2", 00:17:42.059 "uuid": "a164c015-1a1b-5738-aa67-30a682a85b44", 00:17:42.059 "is_configured": true, 00:17:42.059 "data_offset": 2048, 00:17:42.059 "data_size": 63488 00:17:42.059 }, 00:17:42.059 { 00:17:42.059 "name": "BaseBdev3", 00:17:42.059 "uuid": "eda94ff8-0ba6-5e52-8b06-441256238349", 00:17:42.059 "is_configured": true, 00:17:42.059 "data_offset": 2048, 00:17:42.059 "data_size": 63488 00:17:42.059 } 00:17:42.059 ] 00:17:42.059 }' 00:17:42.059 04:08:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:42.059 04:08:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:42.059 04:08:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:42.059 04:08:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:42.059 04:08:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@784 -- # killprocess 82294 00:17:42.059 04:08:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@954 -- # '[' -z 82294 ']' 00:17:42.059 04:08:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@958 -- # kill -0 82294 00:17:42.060 04:08:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@959 -- # uname 00:17:42.318 04:08:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:42.318 04:08:35 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 82294 00:17:42.318 killing process with pid 82294 00:17:42.318 Received shutdown signal, test time was about 60.000000 seconds 00:17:42.318 00:17:42.318 Latency(us) 00:17:42.319 [2024-12-06T04:08:35.673Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:42.319 [2024-12-06T04:08:35.673Z] =================================================================================================================== 00:17:42.319 [2024-12-06T04:08:35.673Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:17:42.319 04:08:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:42.319 04:08:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:42.319 04:08:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 82294' 00:17:42.319 04:08:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@973 -- # kill 82294 00:17:42.319 04:08:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@978 -- # wait 82294 00:17:42.319 [2024-12-06 04:08:35.435323] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:17:42.319 [2024-12-06 04:08:35.435459] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:42.319 [2024-12-06 04:08:35.435540] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:42.319 [2024-12-06 04:08:35.435555] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:17:42.578 [2024-12-06 04:08:35.864540] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:17:43.958 04:08:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@786 -- # return 0 00:17:43.958 00:17:43.958 real 0m23.314s 00:17:43.958 user 0m29.698s 
00:17:43.958 sys 0m2.750s 00:17:43.958 04:08:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:43.958 04:08:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:43.958 ************************************ 00:17:43.958 END TEST raid5f_rebuild_test_sb 00:17:43.958 ************************************ 00:17:43.958 04:08:37 bdev_raid -- bdev/bdev_raid.sh@985 -- # for n in {3..4} 00:17:43.958 04:08:37 bdev_raid -- bdev/bdev_raid.sh@986 -- # run_test raid5f_state_function_test raid_state_function_test raid5f 4 false 00:17:43.958 04:08:37 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:17:43.958 04:08:37 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:43.958 04:08:37 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:17:43.958 ************************************ 00:17:43.958 START TEST raid5f_state_function_test 00:17:43.958 ************************************ 00:17:43.958 04:08:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test raid5f 4 false 00:17:43.958 04:08:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid5f 00:17:43.958 04:08:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:17:43.958 04:08:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:17:43.958 04:08:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:17:43.958 04:08:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:17:43.958 04:08:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:17:43.958 04:08:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:17:43.958 04:08:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 
00:17:43.958 04:08:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:17:43.958 04:08:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:17:43.958 04:08:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:17:43.958 04:08:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:17:43.958 04:08:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:17:43.958 04:08:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:17:43.958 04:08:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:17:43.958 04:08:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:17:43.958 04:08:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:17:43.958 04:08:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:17:43.958 04:08:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:17:43.958 04:08:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:17:43.958 04:08:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:17:43.958 04:08:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:17:43.958 04:08:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:17:43.958 04:08:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:17:43.958 04:08:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid5f '!=' raid1 ']' 00:17:43.958 04:08:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@216 -- # 
strip_size=64 00:17:43.958 04:08:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:17:43.958 04:08:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:17:43.958 04:08:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:17:43.958 04:08:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=83047 00:17:43.958 04:08:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:17:43.958 04:08:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 83047' 00:17:43.958 Process raid pid: 83047 00:17:43.958 04:08:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 83047 00:17:43.958 04:08:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 83047 ']' 00:17:43.958 04:08:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:43.958 04:08:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:43.958 04:08:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:43.958 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:43.958 04:08:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:43.958 04:08:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:43.958 [2024-12-06 04:08:37.288473] Starting SPDK v25.01-pre git sha1 a4d2a837b / DPDK 24.03.0 initialization... 
00:17:43.958 [2024-12-06 04:08:37.288698] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:44.219 [2024-12-06 04:08:37.466516] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:44.479 [2024-12-06 04:08:37.594459] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:44.739 [2024-12-06 04:08:37.834941] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:44.739 [2024-12-06 04:08:37.835085] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:44.999 04:08:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:44.999 04:08:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:17:44.999 04:08:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:17:44.999 04:08:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:44.999 04:08:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:44.999 [2024-12-06 04:08:38.163395] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:17:44.999 [2024-12-06 04:08:38.163462] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:17:44.999 [2024-12-06 04:08:38.163475] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:17:44.999 [2024-12-06 04:08:38.163487] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:44.999 [2024-12-06 04:08:38.163495] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev 
with name: BaseBdev3 00:17:44.999 [2024-12-06 04:08:38.163506] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:17:44.999 [2024-12-06 04:08:38.163514] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:17:44.999 [2024-12-06 04:08:38.163525] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:17:44.999 04:08:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:44.999 04:08:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:17:44.999 04:08:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:44.999 04:08:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:44.999 04:08:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:44.999 04:08:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:44.999 04:08:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:44.999 04:08:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:44.999 04:08:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:44.999 04:08:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:44.999 04:08:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:44.999 04:08:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:44.999 04:08:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:44.999 04:08:38 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:44.999 04:08:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:44.999 04:08:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:44.999 04:08:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:44.999 "name": "Existed_Raid", 00:17:44.999 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:44.999 "strip_size_kb": 64, 00:17:44.999 "state": "configuring", 00:17:44.999 "raid_level": "raid5f", 00:17:44.999 "superblock": false, 00:17:44.999 "num_base_bdevs": 4, 00:17:44.999 "num_base_bdevs_discovered": 0, 00:17:44.999 "num_base_bdevs_operational": 4, 00:17:44.999 "base_bdevs_list": [ 00:17:44.999 { 00:17:44.999 "name": "BaseBdev1", 00:17:44.999 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:44.999 "is_configured": false, 00:17:44.999 "data_offset": 0, 00:17:44.999 "data_size": 0 00:17:44.999 }, 00:17:44.999 { 00:17:44.999 "name": "BaseBdev2", 00:17:44.999 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:44.999 "is_configured": false, 00:17:44.999 "data_offset": 0, 00:17:44.999 "data_size": 0 00:17:44.999 }, 00:17:44.999 { 00:17:44.999 "name": "BaseBdev3", 00:17:44.999 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:44.999 "is_configured": false, 00:17:44.999 "data_offset": 0, 00:17:44.999 "data_size": 0 00:17:44.999 }, 00:17:44.999 { 00:17:44.999 "name": "BaseBdev4", 00:17:44.999 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:44.999 "is_configured": false, 00:17:44.999 "data_offset": 0, 00:17:44.999 "data_size": 0 00:17:44.999 } 00:17:44.999 ] 00:17:44.999 }' 00:17:44.999 04:08:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:44.999 04:08:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:45.569 04:08:38 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:17:45.569 04:08:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:45.569 04:08:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:45.569 [2024-12-06 04:08:38.622546] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:17:45.569 [2024-12-06 04:08:38.622660] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:17:45.569 04:08:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:45.569 04:08:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:17:45.569 04:08:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:45.569 04:08:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:45.569 [2024-12-06 04:08:38.634531] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:17:45.569 [2024-12-06 04:08:38.634625] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:17:45.569 [2024-12-06 04:08:38.634662] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:17:45.569 [2024-12-06 04:08:38.634697] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:45.569 [2024-12-06 04:08:38.634727] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:17:45.569 [2024-12-06 04:08:38.634758] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:17:45.569 [2024-12-06 04:08:38.634786] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 
00:17:45.569 [2024-12-06 04:08:38.634823] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:17:45.569 04:08:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:45.569 04:08:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:17:45.569 04:08:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:45.569 04:08:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:45.569 [2024-12-06 04:08:38.689316] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:45.569 BaseBdev1 00:17:45.569 04:08:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:45.569 04:08:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:17:45.569 04:08:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:17:45.569 04:08:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:17:45.569 04:08:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:17:45.569 04:08:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:17:45.569 04:08:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:17:45.569 04:08:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:17:45.569 04:08:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:45.569 04:08:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:45.569 04:08:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:45.569 
04:08:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:17:45.569 04:08:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:45.569 04:08:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:45.569 [ 00:17:45.569 { 00:17:45.569 "name": "BaseBdev1", 00:17:45.569 "aliases": [ 00:17:45.569 "e3d3ea91-15ab-4e6f-9e9d-e4781795f1b0" 00:17:45.569 ], 00:17:45.569 "product_name": "Malloc disk", 00:17:45.569 "block_size": 512, 00:17:45.569 "num_blocks": 65536, 00:17:45.569 "uuid": "e3d3ea91-15ab-4e6f-9e9d-e4781795f1b0", 00:17:45.569 "assigned_rate_limits": { 00:17:45.569 "rw_ios_per_sec": 0, 00:17:45.569 "rw_mbytes_per_sec": 0, 00:17:45.569 "r_mbytes_per_sec": 0, 00:17:45.569 "w_mbytes_per_sec": 0 00:17:45.569 }, 00:17:45.569 "claimed": true, 00:17:45.569 "claim_type": "exclusive_write", 00:17:45.569 "zoned": false, 00:17:45.569 "supported_io_types": { 00:17:45.569 "read": true, 00:17:45.569 "write": true, 00:17:45.569 "unmap": true, 00:17:45.569 "flush": true, 00:17:45.569 "reset": true, 00:17:45.569 "nvme_admin": false, 00:17:45.569 "nvme_io": false, 00:17:45.569 "nvme_io_md": false, 00:17:45.569 "write_zeroes": true, 00:17:45.569 "zcopy": true, 00:17:45.569 "get_zone_info": false, 00:17:45.569 "zone_management": false, 00:17:45.569 "zone_append": false, 00:17:45.569 "compare": false, 00:17:45.569 "compare_and_write": false, 00:17:45.569 "abort": true, 00:17:45.569 "seek_hole": false, 00:17:45.569 "seek_data": false, 00:17:45.569 "copy": true, 00:17:45.569 "nvme_iov_md": false 00:17:45.569 }, 00:17:45.569 "memory_domains": [ 00:17:45.569 { 00:17:45.569 "dma_device_id": "system", 00:17:45.569 "dma_device_type": 1 00:17:45.569 }, 00:17:45.569 { 00:17:45.569 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:45.569 "dma_device_type": 2 00:17:45.569 } 00:17:45.569 ], 00:17:45.569 "driver_specific": {} 00:17:45.569 } 
00:17:45.569 ] 00:17:45.569 04:08:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:45.569 04:08:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:17:45.569 04:08:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:17:45.569 04:08:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:45.569 04:08:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:45.569 04:08:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:45.569 04:08:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:45.569 04:08:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:45.569 04:08:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:45.569 04:08:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:45.569 04:08:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:45.569 04:08:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:45.569 04:08:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:45.569 04:08:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:45.569 04:08:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:45.569 04:08:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:45.569 04:08:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:17:45.569 04:08:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:45.569 "name": "Existed_Raid", 00:17:45.569 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:45.569 "strip_size_kb": 64, 00:17:45.569 "state": "configuring", 00:17:45.569 "raid_level": "raid5f", 00:17:45.569 "superblock": false, 00:17:45.569 "num_base_bdevs": 4, 00:17:45.569 "num_base_bdevs_discovered": 1, 00:17:45.569 "num_base_bdevs_operational": 4, 00:17:45.569 "base_bdevs_list": [ 00:17:45.569 { 00:17:45.569 "name": "BaseBdev1", 00:17:45.569 "uuid": "e3d3ea91-15ab-4e6f-9e9d-e4781795f1b0", 00:17:45.569 "is_configured": true, 00:17:45.569 "data_offset": 0, 00:17:45.569 "data_size": 65536 00:17:45.569 }, 00:17:45.569 { 00:17:45.569 "name": "BaseBdev2", 00:17:45.569 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:45.569 "is_configured": false, 00:17:45.569 "data_offset": 0, 00:17:45.569 "data_size": 0 00:17:45.569 }, 00:17:45.569 { 00:17:45.569 "name": "BaseBdev3", 00:17:45.569 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:45.569 "is_configured": false, 00:17:45.569 "data_offset": 0, 00:17:45.569 "data_size": 0 00:17:45.569 }, 00:17:45.569 { 00:17:45.569 "name": "BaseBdev4", 00:17:45.569 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:45.569 "is_configured": false, 00:17:45.569 "data_offset": 0, 00:17:45.569 "data_size": 0 00:17:45.569 } 00:17:45.570 ] 00:17:45.570 }' 00:17:45.570 04:08:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:45.570 04:08:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:46.138 04:08:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:17:46.138 04:08:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:46.138 04:08:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:46.138 
[2024-12-06 04:08:39.212541] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:17:46.138 [2024-12-06 04:08:39.212614] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:17:46.138 04:08:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:46.138 04:08:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:17:46.138 04:08:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:46.138 04:08:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:46.138 [2024-12-06 04:08:39.224588] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:46.138 [2024-12-06 04:08:39.226576] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:17:46.138 [2024-12-06 04:08:39.226671] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:46.138 [2024-12-06 04:08:39.226687] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:17:46.138 [2024-12-06 04:08:39.226699] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:17:46.138 [2024-12-06 04:08:39.226707] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:17:46.138 [2024-12-06 04:08:39.226717] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:17:46.138 04:08:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:46.138 04:08:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:17:46.138 04:08:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( 
i < num_base_bdevs )) 00:17:46.138 04:08:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:17:46.138 04:08:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:46.138 04:08:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:46.138 04:08:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:46.138 04:08:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:46.138 04:08:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:46.138 04:08:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:46.138 04:08:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:46.138 04:08:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:46.138 04:08:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:46.138 04:08:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:46.138 04:08:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:46.138 04:08:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:46.138 04:08:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:46.138 04:08:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:46.138 04:08:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:46.138 "name": "Existed_Raid", 00:17:46.138 "uuid": "00000000-0000-0000-0000-000000000000", 
00:17:46.138 "strip_size_kb": 64, 00:17:46.138 "state": "configuring", 00:17:46.138 "raid_level": "raid5f", 00:17:46.138 "superblock": false, 00:17:46.138 "num_base_bdevs": 4, 00:17:46.138 "num_base_bdevs_discovered": 1, 00:17:46.138 "num_base_bdevs_operational": 4, 00:17:46.138 "base_bdevs_list": [ 00:17:46.138 { 00:17:46.138 "name": "BaseBdev1", 00:17:46.138 "uuid": "e3d3ea91-15ab-4e6f-9e9d-e4781795f1b0", 00:17:46.138 "is_configured": true, 00:17:46.138 "data_offset": 0, 00:17:46.138 "data_size": 65536 00:17:46.138 }, 00:17:46.138 { 00:17:46.138 "name": "BaseBdev2", 00:17:46.138 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:46.138 "is_configured": false, 00:17:46.138 "data_offset": 0, 00:17:46.138 "data_size": 0 00:17:46.138 }, 00:17:46.138 { 00:17:46.138 "name": "BaseBdev3", 00:17:46.138 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:46.138 "is_configured": false, 00:17:46.138 "data_offset": 0, 00:17:46.138 "data_size": 0 00:17:46.138 }, 00:17:46.138 { 00:17:46.138 "name": "BaseBdev4", 00:17:46.138 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:46.138 "is_configured": false, 00:17:46.138 "data_offset": 0, 00:17:46.138 "data_size": 0 00:17:46.138 } 00:17:46.138 ] 00:17:46.138 }' 00:17:46.138 04:08:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:46.138 04:08:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:46.398 04:08:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:17:46.398 04:08:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:46.398 04:08:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:46.398 [2024-12-06 04:08:39.717163] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:46.398 BaseBdev2 00:17:46.398 04:08:39 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:46.398 04:08:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:17:46.398 04:08:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:17:46.398 04:08:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:17:46.398 04:08:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:17:46.398 04:08:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:17:46.398 04:08:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:17:46.398 04:08:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:17:46.398 04:08:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:46.398 04:08:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:46.398 04:08:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:46.398 04:08:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:17:46.398 04:08:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:46.398 04:08:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:46.398 [ 00:17:46.398 { 00:17:46.398 "name": "BaseBdev2", 00:17:46.398 "aliases": [ 00:17:46.398 "a4a1bec2-ba04-434a-8b9a-0e333b34b2a1" 00:17:46.398 ], 00:17:46.398 "product_name": "Malloc disk", 00:17:46.398 "block_size": 512, 00:17:46.398 "num_blocks": 65536, 00:17:46.398 "uuid": "a4a1bec2-ba04-434a-8b9a-0e333b34b2a1", 00:17:46.398 "assigned_rate_limits": { 00:17:46.398 "rw_ios_per_sec": 0, 00:17:46.398 "rw_mbytes_per_sec": 0, 00:17:46.398 
"r_mbytes_per_sec": 0, 00:17:46.398 "w_mbytes_per_sec": 0 00:17:46.398 }, 00:17:46.398 "claimed": true, 00:17:46.398 "claim_type": "exclusive_write", 00:17:46.398 "zoned": false, 00:17:46.398 "supported_io_types": { 00:17:46.398 "read": true, 00:17:46.398 "write": true, 00:17:46.398 "unmap": true, 00:17:46.398 "flush": true, 00:17:46.398 "reset": true, 00:17:46.398 "nvme_admin": false, 00:17:46.660 "nvme_io": false, 00:17:46.660 "nvme_io_md": false, 00:17:46.660 "write_zeroes": true, 00:17:46.660 "zcopy": true, 00:17:46.660 "get_zone_info": false, 00:17:46.660 "zone_management": false, 00:17:46.660 "zone_append": false, 00:17:46.660 "compare": false, 00:17:46.660 "compare_and_write": false, 00:17:46.660 "abort": true, 00:17:46.660 "seek_hole": false, 00:17:46.660 "seek_data": false, 00:17:46.660 "copy": true, 00:17:46.660 "nvme_iov_md": false 00:17:46.660 }, 00:17:46.660 "memory_domains": [ 00:17:46.660 { 00:17:46.660 "dma_device_id": "system", 00:17:46.660 "dma_device_type": 1 00:17:46.660 }, 00:17:46.660 { 00:17:46.660 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:46.660 "dma_device_type": 2 00:17:46.660 } 00:17:46.660 ], 00:17:46.660 "driver_specific": {} 00:17:46.660 } 00:17:46.660 ] 00:17:46.660 04:08:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:46.660 04:08:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:17:46.660 04:08:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:17:46.660 04:08:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:17:46.660 04:08:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:17:46.660 04:08:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:46.660 04:08:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # 
local expected_state=configuring 00:17:46.660 04:08:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:46.660 04:08:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:46.660 04:08:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:46.660 04:08:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:46.660 04:08:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:46.660 04:08:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:46.660 04:08:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:46.660 04:08:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:46.660 04:08:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:46.660 04:08:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:46.660 04:08:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:46.660 04:08:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:46.660 04:08:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:46.660 "name": "Existed_Raid", 00:17:46.660 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:46.660 "strip_size_kb": 64, 00:17:46.660 "state": "configuring", 00:17:46.660 "raid_level": "raid5f", 00:17:46.661 "superblock": false, 00:17:46.661 "num_base_bdevs": 4, 00:17:46.661 "num_base_bdevs_discovered": 2, 00:17:46.661 "num_base_bdevs_operational": 4, 00:17:46.661 "base_bdevs_list": [ 00:17:46.661 { 00:17:46.661 "name": "BaseBdev1", 00:17:46.661 "uuid": 
"e3d3ea91-15ab-4e6f-9e9d-e4781795f1b0", 00:17:46.661 "is_configured": true, 00:17:46.661 "data_offset": 0, 00:17:46.661 "data_size": 65536 00:17:46.661 }, 00:17:46.661 { 00:17:46.661 "name": "BaseBdev2", 00:17:46.661 "uuid": "a4a1bec2-ba04-434a-8b9a-0e333b34b2a1", 00:17:46.661 "is_configured": true, 00:17:46.661 "data_offset": 0, 00:17:46.661 "data_size": 65536 00:17:46.661 }, 00:17:46.661 { 00:17:46.661 "name": "BaseBdev3", 00:17:46.661 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:46.661 "is_configured": false, 00:17:46.661 "data_offset": 0, 00:17:46.661 "data_size": 0 00:17:46.661 }, 00:17:46.661 { 00:17:46.661 "name": "BaseBdev4", 00:17:46.661 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:46.661 "is_configured": false, 00:17:46.661 "data_offset": 0, 00:17:46.661 "data_size": 0 00:17:46.661 } 00:17:46.661 ] 00:17:46.661 }' 00:17:46.661 04:08:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:46.661 04:08:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:46.919 04:08:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:17:46.919 04:08:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:46.919 04:08:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:47.179 [2024-12-06 04:08:40.307092] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:17:47.179 BaseBdev3 00:17:47.179 04:08:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:47.179 04:08:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:17:47.179 04:08:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:17:47.179 04:08:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- 
# local bdev_timeout= 00:17:47.179 04:08:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:17:47.179 04:08:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:17:47.179 04:08:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:17:47.179 04:08:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:17:47.179 04:08:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:47.179 04:08:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:47.179 04:08:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:47.179 04:08:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:17:47.179 04:08:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:47.179 04:08:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:47.179 [ 00:17:47.179 { 00:17:47.179 "name": "BaseBdev3", 00:17:47.179 "aliases": [ 00:17:47.179 "841cc1a1-554b-43ff-afbf-6c014013b4fe" 00:17:47.179 ], 00:17:47.179 "product_name": "Malloc disk", 00:17:47.179 "block_size": 512, 00:17:47.179 "num_blocks": 65536, 00:17:47.179 "uuid": "841cc1a1-554b-43ff-afbf-6c014013b4fe", 00:17:47.179 "assigned_rate_limits": { 00:17:47.179 "rw_ios_per_sec": 0, 00:17:47.179 "rw_mbytes_per_sec": 0, 00:17:47.179 "r_mbytes_per_sec": 0, 00:17:47.179 "w_mbytes_per_sec": 0 00:17:47.179 }, 00:17:47.179 "claimed": true, 00:17:47.179 "claim_type": "exclusive_write", 00:17:47.179 "zoned": false, 00:17:47.179 "supported_io_types": { 00:17:47.179 "read": true, 00:17:47.179 "write": true, 00:17:47.179 "unmap": true, 00:17:47.179 "flush": true, 00:17:47.179 "reset": true, 00:17:47.179 "nvme_admin": false, 
00:17:47.179 "nvme_io": false, 00:17:47.179 "nvme_io_md": false, 00:17:47.179 "write_zeroes": true, 00:17:47.179 "zcopy": true, 00:17:47.179 "get_zone_info": false, 00:17:47.179 "zone_management": false, 00:17:47.179 "zone_append": false, 00:17:47.179 "compare": false, 00:17:47.179 "compare_and_write": false, 00:17:47.179 "abort": true, 00:17:47.179 "seek_hole": false, 00:17:47.179 "seek_data": false, 00:17:47.179 "copy": true, 00:17:47.179 "nvme_iov_md": false 00:17:47.179 }, 00:17:47.179 "memory_domains": [ 00:17:47.179 { 00:17:47.179 "dma_device_id": "system", 00:17:47.179 "dma_device_type": 1 00:17:47.179 }, 00:17:47.179 { 00:17:47.179 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:47.179 "dma_device_type": 2 00:17:47.179 } 00:17:47.179 ], 00:17:47.179 "driver_specific": {} 00:17:47.179 } 00:17:47.179 ] 00:17:47.179 04:08:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:47.179 04:08:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:17:47.179 04:08:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:17:47.179 04:08:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:17:47.179 04:08:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:17:47.179 04:08:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:47.179 04:08:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:47.179 04:08:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:47.179 04:08:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:47.179 04:08:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 
00:17:47.179 04:08:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:47.179 04:08:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:47.179 04:08:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:47.179 04:08:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:47.179 04:08:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:47.179 04:08:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:47.179 04:08:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:47.179 04:08:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:47.179 04:08:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:47.179 04:08:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:47.179 "name": "Existed_Raid", 00:17:47.179 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:47.179 "strip_size_kb": 64, 00:17:47.179 "state": "configuring", 00:17:47.179 "raid_level": "raid5f", 00:17:47.179 "superblock": false, 00:17:47.179 "num_base_bdevs": 4, 00:17:47.179 "num_base_bdevs_discovered": 3, 00:17:47.180 "num_base_bdevs_operational": 4, 00:17:47.180 "base_bdevs_list": [ 00:17:47.180 { 00:17:47.180 "name": "BaseBdev1", 00:17:47.180 "uuid": "e3d3ea91-15ab-4e6f-9e9d-e4781795f1b0", 00:17:47.180 "is_configured": true, 00:17:47.180 "data_offset": 0, 00:17:47.180 "data_size": 65536 00:17:47.180 }, 00:17:47.180 { 00:17:47.180 "name": "BaseBdev2", 00:17:47.180 "uuid": "a4a1bec2-ba04-434a-8b9a-0e333b34b2a1", 00:17:47.180 "is_configured": true, 00:17:47.180 "data_offset": 0, 00:17:47.180 "data_size": 65536 00:17:47.180 }, 00:17:47.180 { 
00:17:47.180 "name": "BaseBdev3", 00:17:47.180 "uuid": "841cc1a1-554b-43ff-afbf-6c014013b4fe", 00:17:47.180 "is_configured": true, 00:17:47.180 "data_offset": 0, 00:17:47.180 "data_size": 65536 00:17:47.180 }, 00:17:47.180 { 00:17:47.180 "name": "BaseBdev4", 00:17:47.180 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:47.180 "is_configured": false, 00:17:47.180 "data_offset": 0, 00:17:47.180 "data_size": 0 00:17:47.180 } 00:17:47.180 ] 00:17:47.180 }' 00:17:47.180 04:08:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:47.180 04:08:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:47.747 04:08:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:17:47.747 04:08:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:47.747 04:08:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:47.747 [2024-12-06 04:08:40.852473] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:17:47.747 [2024-12-06 04:08:40.852561] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:17:47.747 [2024-12-06 04:08:40.852572] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:17:47.747 [2024-12-06 04:08:40.852836] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:17:47.747 [2024-12-06 04:08:40.860911] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:17:47.747 [2024-12-06 04:08:40.860941] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:17:47.747 [2024-12-06 04:08:40.861285] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:47.747 BaseBdev4 00:17:47.747 04:08:40 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:47.747 04:08:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:17:47.747 04:08:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:17:47.747 04:08:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:17:47.747 04:08:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:17:47.747 04:08:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:17:47.747 04:08:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:17:47.747 04:08:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:17:47.747 04:08:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:47.747 04:08:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:47.747 04:08:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:47.747 04:08:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:17:47.747 04:08:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:47.747 04:08:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:47.747 [ 00:17:47.747 { 00:17:47.747 "name": "BaseBdev4", 00:17:47.747 "aliases": [ 00:17:47.747 "7674ae14-76fc-4a4d-ac66-b09e210cbcd0" 00:17:47.747 ], 00:17:47.747 "product_name": "Malloc disk", 00:17:47.747 "block_size": 512, 00:17:47.747 "num_blocks": 65536, 00:17:47.747 "uuid": "7674ae14-76fc-4a4d-ac66-b09e210cbcd0", 00:17:47.747 "assigned_rate_limits": { 00:17:47.747 "rw_ios_per_sec": 0, 00:17:47.748 
"rw_mbytes_per_sec": 0, 00:17:47.748 "r_mbytes_per_sec": 0, 00:17:47.748 "w_mbytes_per_sec": 0 00:17:47.748 }, 00:17:47.748 "claimed": true, 00:17:47.748 "claim_type": "exclusive_write", 00:17:47.748 "zoned": false, 00:17:47.748 "supported_io_types": { 00:17:47.748 "read": true, 00:17:47.748 "write": true, 00:17:47.748 "unmap": true, 00:17:47.748 "flush": true, 00:17:47.748 "reset": true, 00:17:47.748 "nvme_admin": false, 00:17:47.748 "nvme_io": false, 00:17:47.748 "nvme_io_md": false, 00:17:47.748 "write_zeroes": true, 00:17:47.748 "zcopy": true, 00:17:47.748 "get_zone_info": false, 00:17:47.748 "zone_management": false, 00:17:47.748 "zone_append": false, 00:17:47.748 "compare": false, 00:17:47.748 "compare_and_write": false, 00:17:47.748 "abort": true, 00:17:47.748 "seek_hole": false, 00:17:47.748 "seek_data": false, 00:17:47.748 "copy": true, 00:17:47.748 "nvme_iov_md": false 00:17:47.748 }, 00:17:47.748 "memory_domains": [ 00:17:47.748 { 00:17:47.748 "dma_device_id": "system", 00:17:47.748 "dma_device_type": 1 00:17:47.748 }, 00:17:47.748 { 00:17:47.748 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:47.748 "dma_device_type": 2 00:17:47.748 } 00:17:47.748 ], 00:17:47.748 "driver_specific": {} 00:17:47.748 } 00:17:47.748 ] 00:17:47.748 04:08:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:47.748 04:08:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:17:47.748 04:08:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:17:47.748 04:08:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:17:47.748 04:08:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 4 00:17:47.748 04:08:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:47.748 04:08:40 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:47.748 04:08:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:47.748 04:08:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:47.748 04:08:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:47.748 04:08:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:47.748 04:08:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:47.748 04:08:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:47.748 04:08:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:47.748 04:08:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:47.748 04:08:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:47.748 04:08:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:47.748 04:08:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:47.748 04:08:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:47.748 04:08:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:47.748 "name": "Existed_Raid", 00:17:47.748 "uuid": "381a90c7-b602-44a9-a7e2-a9472e535474", 00:17:47.748 "strip_size_kb": 64, 00:17:47.748 "state": "online", 00:17:47.748 "raid_level": "raid5f", 00:17:47.748 "superblock": false, 00:17:47.748 "num_base_bdevs": 4, 00:17:47.748 "num_base_bdevs_discovered": 4, 00:17:47.748 "num_base_bdevs_operational": 4, 00:17:47.748 "base_bdevs_list": [ 00:17:47.748 { 00:17:47.748 "name": 
"BaseBdev1", 00:17:47.748 "uuid": "e3d3ea91-15ab-4e6f-9e9d-e4781795f1b0", 00:17:47.748 "is_configured": true, 00:17:47.748 "data_offset": 0, 00:17:47.748 "data_size": 65536 00:17:47.748 }, 00:17:47.748 { 00:17:47.748 "name": "BaseBdev2", 00:17:47.748 "uuid": "a4a1bec2-ba04-434a-8b9a-0e333b34b2a1", 00:17:47.748 "is_configured": true, 00:17:47.748 "data_offset": 0, 00:17:47.748 "data_size": 65536 00:17:47.748 }, 00:17:47.748 { 00:17:47.748 "name": "BaseBdev3", 00:17:47.748 "uuid": "841cc1a1-554b-43ff-afbf-6c014013b4fe", 00:17:47.748 "is_configured": true, 00:17:47.748 "data_offset": 0, 00:17:47.748 "data_size": 65536 00:17:47.748 }, 00:17:47.748 { 00:17:47.748 "name": "BaseBdev4", 00:17:47.748 "uuid": "7674ae14-76fc-4a4d-ac66-b09e210cbcd0", 00:17:47.748 "is_configured": true, 00:17:47.748 "data_offset": 0, 00:17:47.748 "data_size": 65536 00:17:47.748 } 00:17:47.748 ] 00:17:47.748 }' 00:17:47.748 04:08:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:47.748 04:08:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:48.317 04:08:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:17:48.317 04:08:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:17:48.317 04:08:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:17:48.317 04:08:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:17:48.317 04:08:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:17:48.317 04:08:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:17:48.317 04:08:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:17:48.317 04:08:41 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:17:48.317 04:08:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:48.317 04:08:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:17:48.317 [2024-12-06 04:08:41.378209] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:48.317 04:08:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:48.317 04:08:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:17:48.317 "name": "Existed_Raid", 00:17:48.317 "aliases": [ 00:17:48.317 "381a90c7-b602-44a9-a7e2-a9472e535474" 00:17:48.317 ], 00:17:48.317 "product_name": "Raid Volume", 00:17:48.317 "block_size": 512, 00:17:48.317 "num_blocks": 196608, 00:17:48.317 "uuid": "381a90c7-b602-44a9-a7e2-a9472e535474", 00:17:48.317 "assigned_rate_limits": { 00:17:48.317 "rw_ios_per_sec": 0, 00:17:48.317 "rw_mbytes_per_sec": 0, 00:17:48.317 "r_mbytes_per_sec": 0, 00:17:48.317 "w_mbytes_per_sec": 0 00:17:48.317 }, 00:17:48.317 "claimed": false, 00:17:48.317 "zoned": false, 00:17:48.317 "supported_io_types": { 00:17:48.317 "read": true, 00:17:48.317 "write": true, 00:17:48.317 "unmap": false, 00:17:48.317 "flush": false, 00:17:48.317 "reset": true, 00:17:48.317 "nvme_admin": false, 00:17:48.317 "nvme_io": false, 00:17:48.317 "nvme_io_md": false, 00:17:48.317 "write_zeroes": true, 00:17:48.317 "zcopy": false, 00:17:48.317 "get_zone_info": false, 00:17:48.317 "zone_management": false, 00:17:48.317 "zone_append": false, 00:17:48.317 "compare": false, 00:17:48.317 "compare_and_write": false, 00:17:48.317 "abort": false, 00:17:48.317 "seek_hole": false, 00:17:48.317 "seek_data": false, 00:17:48.317 "copy": false, 00:17:48.317 "nvme_iov_md": false 00:17:48.317 }, 00:17:48.317 "driver_specific": { 00:17:48.317 "raid": { 00:17:48.317 "uuid": "381a90c7-b602-44a9-a7e2-a9472e535474", 00:17:48.317 "strip_size_kb": 64, 
00:17:48.317 "state": "online", 00:17:48.317 "raid_level": "raid5f", 00:17:48.317 "superblock": false, 00:17:48.317 "num_base_bdevs": 4, 00:17:48.317 "num_base_bdevs_discovered": 4, 00:17:48.317 "num_base_bdevs_operational": 4, 00:17:48.317 "base_bdevs_list": [ 00:17:48.317 { 00:17:48.317 "name": "BaseBdev1", 00:17:48.317 "uuid": "e3d3ea91-15ab-4e6f-9e9d-e4781795f1b0", 00:17:48.317 "is_configured": true, 00:17:48.317 "data_offset": 0, 00:17:48.317 "data_size": 65536 00:17:48.317 }, 00:17:48.317 { 00:17:48.317 "name": "BaseBdev2", 00:17:48.317 "uuid": "a4a1bec2-ba04-434a-8b9a-0e333b34b2a1", 00:17:48.317 "is_configured": true, 00:17:48.317 "data_offset": 0, 00:17:48.317 "data_size": 65536 00:17:48.317 }, 00:17:48.317 { 00:17:48.317 "name": "BaseBdev3", 00:17:48.317 "uuid": "841cc1a1-554b-43ff-afbf-6c014013b4fe", 00:17:48.317 "is_configured": true, 00:17:48.317 "data_offset": 0, 00:17:48.317 "data_size": 65536 00:17:48.317 }, 00:17:48.317 { 00:17:48.317 "name": "BaseBdev4", 00:17:48.317 "uuid": "7674ae14-76fc-4a4d-ac66-b09e210cbcd0", 00:17:48.317 "is_configured": true, 00:17:48.317 "data_offset": 0, 00:17:48.317 "data_size": 65536 00:17:48.317 } 00:17:48.317 ] 00:17:48.317 } 00:17:48.317 } 00:17:48.317 }' 00:17:48.317 04:08:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:17:48.317 04:08:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:17:48.317 BaseBdev2 00:17:48.317 BaseBdev3 00:17:48.317 BaseBdev4' 00:17:48.317 04:08:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:48.317 04:08:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:17:48.317 04:08:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:48.317 04:08:41 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:48.317 04:08:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:17:48.317 04:08:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:48.317 04:08:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:48.317 04:08:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:48.317 04:08:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:17:48.317 04:08:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:17:48.317 04:08:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:48.317 04:08:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:48.317 04:08:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:17:48.317 04:08:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:48.317 04:08:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:48.317 04:08:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:48.317 04:08:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:17:48.317 04:08:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:17:48.317 04:08:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:48.317 04:08:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd 
bdev_get_bdevs -b BaseBdev3 00:17:48.317 04:08:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:48.317 04:08:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:48.317 04:08:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:48.317 04:08:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:48.317 04:08:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:17:48.317 04:08:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:17:48.317 04:08:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:48.317 04:08:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:17:48.317 04:08:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:48.317 04:08:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:48.317 04:08:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:48.577 04:08:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:48.577 04:08:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:17:48.577 04:08:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:17:48.577 04:08:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:17:48.577 04:08:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:48.577 04:08:41 bdev_raid.raid5f_state_function_test 
-- common/autotest_common.sh@10 -- # set +x 00:17:48.577 [2024-12-06 04:08:41.713471] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:17:48.577 04:08:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:48.577 04:08:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:17:48.577 04:08:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid5f 00:17:48.577 04:08:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:17:48.577 04:08:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@199 -- # return 0 00:17:48.577 04:08:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:17:48.577 04:08:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:17:48.577 04:08:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:48.577 04:08:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:48.577 04:08:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:48.577 04:08:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:48.577 04:08:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:48.577 04:08:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:48.577 04:08:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:48.577 04:08:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:48.577 04:08:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:48.577 04:08:41 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:48.577 04:08:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:48.577 04:08:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:48.577 04:08:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:48.577 04:08:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:48.577 04:08:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:48.577 "name": "Existed_Raid", 00:17:48.577 "uuid": "381a90c7-b602-44a9-a7e2-a9472e535474", 00:17:48.577 "strip_size_kb": 64, 00:17:48.577 "state": "online", 00:17:48.577 "raid_level": "raid5f", 00:17:48.577 "superblock": false, 00:17:48.577 "num_base_bdevs": 4, 00:17:48.577 "num_base_bdevs_discovered": 3, 00:17:48.577 "num_base_bdevs_operational": 3, 00:17:48.577 "base_bdevs_list": [ 00:17:48.577 { 00:17:48.577 "name": null, 00:17:48.577 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:48.577 "is_configured": false, 00:17:48.577 "data_offset": 0, 00:17:48.577 "data_size": 65536 00:17:48.577 }, 00:17:48.577 { 00:17:48.577 "name": "BaseBdev2", 00:17:48.577 "uuid": "a4a1bec2-ba04-434a-8b9a-0e333b34b2a1", 00:17:48.577 "is_configured": true, 00:17:48.577 "data_offset": 0, 00:17:48.577 "data_size": 65536 00:17:48.577 }, 00:17:48.577 { 00:17:48.577 "name": "BaseBdev3", 00:17:48.577 "uuid": "841cc1a1-554b-43ff-afbf-6c014013b4fe", 00:17:48.577 "is_configured": true, 00:17:48.577 "data_offset": 0, 00:17:48.577 "data_size": 65536 00:17:48.577 }, 00:17:48.577 { 00:17:48.577 "name": "BaseBdev4", 00:17:48.577 "uuid": "7674ae14-76fc-4a4d-ac66-b09e210cbcd0", 00:17:48.577 "is_configured": true, 00:17:48.577 "data_offset": 0, 00:17:48.577 "data_size": 65536 00:17:48.577 } 00:17:48.577 ] 00:17:48.577 }' 00:17:48.577 
04:08:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:48.577 04:08:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:49.156 04:08:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:17:49.156 04:08:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:17:49.156 04:08:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:49.156 04:08:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:49.156 04:08:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:49.156 04:08:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:17:49.156 04:08:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:49.156 04:08:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:17:49.156 04:08:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:17:49.156 04:08:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:17:49.156 04:08:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:49.156 04:08:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:49.156 [2024-12-06 04:08:42.309962] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:17:49.156 [2024-12-06 04:08:42.310087] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:49.157 [2024-12-06 04:08:42.413232] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:49.157 04:08:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 
0 == 0 ]] 00:17:49.157 04:08:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:17:49.157 04:08:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:17:49.157 04:08:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:49.157 04:08:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:49.157 04:08:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:17:49.157 04:08:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:49.157 04:08:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:49.157 04:08:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:17:49.157 04:08:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:17:49.157 04:08:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:17:49.157 04:08:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:49.157 04:08:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:49.157 [2024-12-06 04:08:42.473225] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:17:49.415 04:08:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:49.415 04:08:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:17:49.415 04:08:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:17:49.415 04:08:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:49.415 04:08:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # 
jq -r '.[0]["name"]' 00:17:49.415 04:08:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:49.415 04:08:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:49.415 04:08:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:49.415 04:08:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:17:49.415 04:08:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:17:49.415 04:08:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:17:49.415 04:08:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:49.415 04:08:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:49.415 [2024-12-06 04:08:42.639254] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:17:49.415 [2024-12-06 04:08:42.639314] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:17:49.415 04:08:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:49.415 04:08:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:17:49.415 04:08:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:17:49.415 04:08:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:49.415 04:08:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:17:49.415 04:08:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:49.415 04:08:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:49.415 04:08:42 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:49.675 04:08:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:17:49.675 04:08:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:17:49.675 04:08:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:17:49.675 04:08:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:17:49.675 04:08:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:17:49.675 04:08:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:17:49.675 04:08:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:49.675 04:08:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:49.675 BaseBdev2 00:17:49.675 04:08:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:49.675 04:08:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:17:49.675 04:08:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:17:49.675 04:08:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:17:49.675 04:08:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:17:49.675 04:08:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:17:49.675 04:08:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:17:49.675 04:08:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:17:49.675 04:08:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:17:49.675 04:08:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:49.675 04:08:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:49.675 04:08:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:17:49.675 04:08:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:49.675 04:08:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:49.675 [ 00:17:49.675 { 00:17:49.675 "name": "BaseBdev2", 00:17:49.675 "aliases": [ 00:17:49.675 "d3f4f801-5005-4828-8496-9734e8ff2ca6" 00:17:49.675 ], 00:17:49.675 "product_name": "Malloc disk", 00:17:49.675 "block_size": 512, 00:17:49.675 "num_blocks": 65536, 00:17:49.675 "uuid": "d3f4f801-5005-4828-8496-9734e8ff2ca6", 00:17:49.675 "assigned_rate_limits": { 00:17:49.675 "rw_ios_per_sec": 0, 00:17:49.675 "rw_mbytes_per_sec": 0, 00:17:49.675 "r_mbytes_per_sec": 0, 00:17:49.675 "w_mbytes_per_sec": 0 00:17:49.675 }, 00:17:49.675 "claimed": false, 00:17:49.675 "zoned": false, 00:17:49.675 "supported_io_types": { 00:17:49.675 "read": true, 00:17:49.675 "write": true, 00:17:49.675 "unmap": true, 00:17:49.675 "flush": true, 00:17:49.675 "reset": true, 00:17:49.675 "nvme_admin": false, 00:17:49.675 "nvme_io": false, 00:17:49.675 "nvme_io_md": false, 00:17:49.675 "write_zeroes": true, 00:17:49.675 "zcopy": true, 00:17:49.675 "get_zone_info": false, 00:17:49.675 "zone_management": false, 00:17:49.675 "zone_append": false, 00:17:49.675 "compare": false, 00:17:49.675 "compare_and_write": false, 00:17:49.675 "abort": true, 00:17:49.675 "seek_hole": false, 00:17:49.675 "seek_data": false, 00:17:49.675 "copy": true, 00:17:49.675 "nvme_iov_md": false 00:17:49.675 }, 00:17:49.675 "memory_domains": [ 00:17:49.675 { 00:17:49.675 "dma_device_id": "system", 00:17:49.675 "dma_device_type": 1 00:17:49.675 }, 
00:17:49.675 { 00:17:49.675 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:49.675 "dma_device_type": 2 00:17:49.675 } 00:17:49.675 ], 00:17:49.675 "driver_specific": {} 00:17:49.675 } 00:17:49.675 ] 00:17:49.675 04:08:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:49.675 04:08:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:17:49.675 04:08:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:17:49.675 04:08:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:17:49.675 04:08:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:17:49.675 04:08:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:49.675 04:08:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:49.675 BaseBdev3 00:17:49.675 04:08:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:49.675 04:08:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:17:49.675 04:08:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:17:49.675 04:08:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:17:49.675 04:08:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:17:49.675 04:08:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:17:49.675 04:08:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:17:49.675 04:08:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:17:49.675 04:08:42 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:17:49.675 04:08:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:49.675 04:08:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:49.675 04:08:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:17:49.675 04:08:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:49.675 04:08:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:49.675 [ 00:17:49.675 { 00:17:49.675 "name": "BaseBdev3", 00:17:49.675 "aliases": [ 00:17:49.675 "61836e04-8c58-4200-aabd-de598c6ef709" 00:17:49.675 ], 00:17:49.675 "product_name": "Malloc disk", 00:17:49.675 "block_size": 512, 00:17:49.675 "num_blocks": 65536, 00:17:49.675 "uuid": "61836e04-8c58-4200-aabd-de598c6ef709", 00:17:49.675 "assigned_rate_limits": { 00:17:49.675 "rw_ios_per_sec": 0, 00:17:49.675 "rw_mbytes_per_sec": 0, 00:17:49.675 "r_mbytes_per_sec": 0, 00:17:49.675 "w_mbytes_per_sec": 0 00:17:49.675 }, 00:17:49.675 "claimed": false, 00:17:49.675 "zoned": false, 00:17:49.675 "supported_io_types": { 00:17:49.675 "read": true, 00:17:49.675 "write": true, 00:17:49.675 "unmap": true, 00:17:49.675 "flush": true, 00:17:49.675 "reset": true, 00:17:49.675 "nvme_admin": false, 00:17:49.675 "nvme_io": false, 00:17:49.675 "nvme_io_md": false, 00:17:49.675 "write_zeroes": true, 00:17:49.675 "zcopy": true, 00:17:49.675 "get_zone_info": false, 00:17:49.675 "zone_management": false, 00:17:49.675 "zone_append": false, 00:17:49.675 "compare": false, 00:17:49.675 "compare_and_write": false, 00:17:49.675 "abort": true, 00:17:49.675 "seek_hole": false, 00:17:49.675 "seek_data": false, 00:17:49.675 "copy": true, 00:17:49.675 "nvme_iov_md": false 00:17:49.676 }, 00:17:49.676 "memory_domains": [ 00:17:49.676 { 00:17:49.676 "dma_device_id": "system", 00:17:49.676 
"dma_device_type": 1 00:17:49.676 }, 00:17:49.676 { 00:17:49.676 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:49.676 "dma_device_type": 2 00:17:49.676 } 00:17:49.676 ], 00:17:49.676 "driver_specific": {} 00:17:49.676 } 00:17:49.676 ] 00:17:49.676 04:08:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:49.676 04:08:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:17:49.676 04:08:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:17:49.676 04:08:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:17:49.676 04:08:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:17:49.676 04:08:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:49.676 04:08:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:49.676 BaseBdev4 00:17:49.676 04:08:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:49.676 04:08:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:17:49.676 04:08:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:17:49.676 04:08:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:17:49.676 04:08:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:17:49.676 04:08:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:17:49.676 04:08:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:17:49.676 04:08:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:17:49.676 04:08:43 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:49.676 04:08:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:49.935 04:08:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:49.935 04:08:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:17:49.935 04:08:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:49.935 04:08:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:49.935 [ 00:17:49.935 { 00:17:49.935 "name": "BaseBdev4", 00:17:49.935 "aliases": [ 00:17:49.935 "e4e64a6e-84f9-426c-9512-83d281d1527c" 00:17:49.935 ], 00:17:49.935 "product_name": "Malloc disk", 00:17:49.935 "block_size": 512, 00:17:49.935 "num_blocks": 65536, 00:17:49.935 "uuid": "e4e64a6e-84f9-426c-9512-83d281d1527c", 00:17:49.935 "assigned_rate_limits": { 00:17:49.935 "rw_ios_per_sec": 0, 00:17:49.935 "rw_mbytes_per_sec": 0, 00:17:49.935 "r_mbytes_per_sec": 0, 00:17:49.935 "w_mbytes_per_sec": 0 00:17:49.935 }, 00:17:49.935 "claimed": false, 00:17:49.935 "zoned": false, 00:17:49.935 "supported_io_types": { 00:17:49.935 "read": true, 00:17:49.935 "write": true, 00:17:49.935 "unmap": true, 00:17:49.935 "flush": true, 00:17:49.935 "reset": true, 00:17:49.935 "nvme_admin": false, 00:17:49.935 "nvme_io": false, 00:17:49.935 "nvme_io_md": false, 00:17:49.935 "write_zeroes": true, 00:17:49.935 "zcopy": true, 00:17:49.935 "get_zone_info": false, 00:17:49.935 "zone_management": false, 00:17:49.935 "zone_append": false, 00:17:49.935 "compare": false, 00:17:49.935 "compare_and_write": false, 00:17:49.935 "abort": true, 00:17:49.935 "seek_hole": false, 00:17:49.935 "seek_data": false, 00:17:49.935 "copy": true, 00:17:49.935 "nvme_iov_md": false 00:17:49.935 }, 00:17:49.935 "memory_domains": [ 00:17:49.935 { 00:17:49.935 
"dma_device_id": "system", 00:17:49.935 "dma_device_type": 1 00:17:49.935 }, 00:17:49.935 { 00:17:49.935 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:49.935 "dma_device_type": 2 00:17:49.935 } 00:17:49.935 ], 00:17:49.935 "driver_specific": {} 00:17:49.935 } 00:17:49.935 ] 00:17:49.935 04:08:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:49.935 04:08:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:17:49.935 04:08:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:17:49.935 04:08:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:17:49.935 04:08:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:17:49.935 04:08:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:49.935 04:08:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:49.935 [2024-12-06 04:08:43.056145] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:17:49.935 [2024-12-06 04:08:43.056192] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:17:49.935 [2024-12-06 04:08:43.056218] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:49.935 [2024-12-06 04:08:43.058376] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:17:49.935 [2024-12-06 04:08:43.058440] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:17:49.935 04:08:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:49.935 04:08:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid 
configuring raid5f 64 4 00:17:49.935 04:08:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:49.935 04:08:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:49.935 04:08:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:49.935 04:08:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:49.935 04:08:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:49.935 04:08:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:49.935 04:08:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:49.935 04:08:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:49.935 04:08:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:49.935 04:08:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:49.935 04:08:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:49.935 04:08:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:49.936 04:08:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:49.936 04:08:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:49.936 04:08:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:49.936 "name": "Existed_Raid", 00:17:49.936 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:49.936 "strip_size_kb": 64, 00:17:49.936 "state": "configuring", 00:17:49.936 "raid_level": "raid5f", 00:17:49.936 "superblock": false, 00:17:49.936 
"num_base_bdevs": 4, 00:17:49.936 "num_base_bdevs_discovered": 3, 00:17:49.936 "num_base_bdevs_operational": 4, 00:17:49.936 "base_bdevs_list": [ 00:17:49.936 { 00:17:49.936 "name": "BaseBdev1", 00:17:49.936 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:49.936 "is_configured": false, 00:17:49.936 "data_offset": 0, 00:17:49.936 "data_size": 0 00:17:49.936 }, 00:17:49.936 { 00:17:49.936 "name": "BaseBdev2", 00:17:49.936 "uuid": "d3f4f801-5005-4828-8496-9734e8ff2ca6", 00:17:49.936 "is_configured": true, 00:17:49.936 "data_offset": 0, 00:17:49.936 "data_size": 65536 00:17:49.936 }, 00:17:49.936 { 00:17:49.936 "name": "BaseBdev3", 00:17:49.936 "uuid": "61836e04-8c58-4200-aabd-de598c6ef709", 00:17:49.936 "is_configured": true, 00:17:49.936 "data_offset": 0, 00:17:49.936 "data_size": 65536 00:17:49.936 }, 00:17:49.936 { 00:17:49.936 "name": "BaseBdev4", 00:17:49.936 "uuid": "e4e64a6e-84f9-426c-9512-83d281d1527c", 00:17:49.936 "is_configured": true, 00:17:49.936 "data_offset": 0, 00:17:49.936 "data_size": 65536 00:17:49.936 } 00:17:49.936 ] 00:17:49.936 }' 00:17:49.936 04:08:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:49.936 04:08:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:50.195 04:08:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:17:50.195 04:08:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:50.195 04:08:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:50.195 [2024-12-06 04:08:43.511364] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:17:50.195 04:08:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:50.195 04:08:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 
00:17:50.195 04:08:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:50.196 04:08:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:50.196 04:08:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:50.196 04:08:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:50.196 04:08:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:50.196 04:08:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:50.196 04:08:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:50.196 04:08:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:50.196 04:08:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:50.196 04:08:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:50.196 04:08:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:50.196 04:08:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:50.196 04:08:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:50.196 04:08:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:50.455 04:08:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:50.455 "name": "Existed_Raid", 00:17:50.455 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:50.455 "strip_size_kb": 64, 00:17:50.455 "state": "configuring", 00:17:50.455 "raid_level": "raid5f", 00:17:50.455 "superblock": false, 00:17:50.455 "num_base_bdevs": 4, 
00:17:50.455 "num_base_bdevs_discovered": 2, 00:17:50.455 "num_base_bdevs_operational": 4, 00:17:50.455 "base_bdevs_list": [ 00:17:50.455 { 00:17:50.455 "name": "BaseBdev1", 00:17:50.455 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:50.455 "is_configured": false, 00:17:50.455 "data_offset": 0, 00:17:50.455 "data_size": 0 00:17:50.455 }, 00:17:50.455 { 00:17:50.455 "name": null, 00:17:50.455 "uuid": "d3f4f801-5005-4828-8496-9734e8ff2ca6", 00:17:50.455 "is_configured": false, 00:17:50.455 "data_offset": 0, 00:17:50.455 "data_size": 65536 00:17:50.455 }, 00:17:50.455 { 00:17:50.455 "name": "BaseBdev3", 00:17:50.455 "uuid": "61836e04-8c58-4200-aabd-de598c6ef709", 00:17:50.455 "is_configured": true, 00:17:50.455 "data_offset": 0, 00:17:50.455 "data_size": 65536 00:17:50.455 }, 00:17:50.455 { 00:17:50.455 "name": "BaseBdev4", 00:17:50.455 "uuid": "e4e64a6e-84f9-426c-9512-83d281d1527c", 00:17:50.455 "is_configured": true, 00:17:50.455 "data_offset": 0, 00:17:50.455 "data_size": 65536 00:17:50.455 } 00:17:50.455 ] 00:17:50.455 }' 00:17:50.455 04:08:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:50.455 04:08:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:50.715 04:08:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:50.715 04:08:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:17:50.715 04:08:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:50.715 04:08:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:50.715 04:08:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:50.715 04:08:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:17:50.715 04:08:44 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:17:50.715 04:08:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:50.715 04:08:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:50.974 [2024-12-06 04:08:44.083876] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:50.975 BaseBdev1 00:17:50.975 04:08:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:50.975 04:08:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:17:50.975 04:08:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:17:50.975 04:08:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:17:50.975 04:08:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:17:50.975 04:08:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:17:50.975 04:08:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:17:50.975 04:08:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:17:50.975 04:08:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:50.975 04:08:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:50.975 04:08:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:50.975 04:08:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:17:50.975 04:08:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:50.975 04:08:44 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:50.975 [ 00:17:50.975 { 00:17:50.975 "name": "BaseBdev1", 00:17:50.975 "aliases": [ 00:17:50.975 "2c6ca2f1-732a-4086-ab39-f0c9b76a85e3" 00:17:50.975 ], 00:17:50.975 "product_name": "Malloc disk", 00:17:50.975 "block_size": 512, 00:17:50.975 "num_blocks": 65536, 00:17:50.975 "uuid": "2c6ca2f1-732a-4086-ab39-f0c9b76a85e3", 00:17:50.975 "assigned_rate_limits": { 00:17:50.975 "rw_ios_per_sec": 0, 00:17:50.975 "rw_mbytes_per_sec": 0, 00:17:50.975 "r_mbytes_per_sec": 0, 00:17:50.975 "w_mbytes_per_sec": 0 00:17:50.975 }, 00:17:50.975 "claimed": true, 00:17:50.975 "claim_type": "exclusive_write", 00:17:50.975 "zoned": false, 00:17:50.975 "supported_io_types": { 00:17:50.975 "read": true, 00:17:50.975 "write": true, 00:17:50.975 "unmap": true, 00:17:50.975 "flush": true, 00:17:50.975 "reset": true, 00:17:50.975 "nvme_admin": false, 00:17:50.975 "nvme_io": false, 00:17:50.975 "nvme_io_md": false, 00:17:50.975 "write_zeroes": true, 00:17:50.975 "zcopy": true, 00:17:50.975 "get_zone_info": false, 00:17:50.975 "zone_management": false, 00:17:50.975 "zone_append": false, 00:17:50.975 "compare": false, 00:17:50.975 "compare_and_write": false, 00:17:50.975 "abort": true, 00:17:50.975 "seek_hole": false, 00:17:50.975 "seek_data": false, 00:17:50.975 "copy": true, 00:17:50.975 "nvme_iov_md": false 00:17:50.975 }, 00:17:50.975 "memory_domains": [ 00:17:50.975 { 00:17:50.975 "dma_device_id": "system", 00:17:50.975 "dma_device_type": 1 00:17:50.975 }, 00:17:50.975 { 00:17:50.975 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:50.975 "dma_device_type": 2 00:17:50.975 } 00:17:50.975 ], 00:17:50.975 "driver_specific": {} 00:17:50.975 } 00:17:50.975 ] 00:17:50.975 04:08:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:50.975 04:08:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:17:50.975 04:08:44 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:17:50.975 04:08:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:50.975 04:08:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:50.975 04:08:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:50.975 04:08:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:50.975 04:08:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:50.975 04:08:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:50.975 04:08:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:50.975 04:08:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:50.975 04:08:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:50.975 04:08:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:50.975 04:08:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:50.975 04:08:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:50.975 04:08:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:50.975 04:08:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:50.975 04:08:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:50.975 "name": "Existed_Raid", 00:17:50.975 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:50.975 "strip_size_kb": 64, 00:17:50.975 "state": 
"configuring", 00:17:50.975 "raid_level": "raid5f", 00:17:50.975 "superblock": false, 00:17:50.975 "num_base_bdevs": 4, 00:17:50.975 "num_base_bdevs_discovered": 3, 00:17:50.975 "num_base_bdevs_operational": 4, 00:17:50.975 "base_bdevs_list": [ 00:17:50.975 { 00:17:50.975 "name": "BaseBdev1", 00:17:50.975 "uuid": "2c6ca2f1-732a-4086-ab39-f0c9b76a85e3", 00:17:50.975 "is_configured": true, 00:17:50.975 "data_offset": 0, 00:17:50.975 "data_size": 65536 00:17:50.975 }, 00:17:50.975 { 00:17:50.975 "name": null, 00:17:50.975 "uuid": "d3f4f801-5005-4828-8496-9734e8ff2ca6", 00:17:50.975 "is_configured": false, 00:17:50.975 "data_offset": 0, 00:17:50.975 "data_size": 65536 00:17:50.975 }, 00:17:50.975 { 00:17:50.975 "name": "BaseBdev3", 00:17:50.975 "uuid": "61836e04-8c58-4200-aabd-de598c6ef709", 00:17:50.975 "is_configured": true, 00:17:50.975 "data_offset": 0, 00:17:50.975 "data_size": 65536 00:17:50.975 }, 00:17:50.975 { 00:17:50.975 "name": "BaseBdev4", 00:17:50.975 "uuid": "e4e64a6e-84f9-426c-9512-83d281d1527c", 00:17:50.975 "is_configured": true, 00:17:50.975 "data_offset": 0, 00:17:50.975 "data_size": 65536 00:17:50.975 } 00:17:50.975 ] 00:17:50.975 }' 00:17:50.975 04:08:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:50.975 04:08:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:51.235 04:08:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:51.235 04:08:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:17:51.494 04:08:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:51.494 04:08:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:51.494 04:08:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:51.494 04:08:44 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:17:51.494 04:08:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:17:51.494 04:08:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:51.494 04:08:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:51.494 [2024-12-06 04:08:44.639067] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:17:51.494 04:08:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:51.494 04:08:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:17:51.494 04:08:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:51.494 04:08:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:51.494 04:08:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:51.494 04:08:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:51.494 04:08:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:51.494 04:08:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:51.494 04:08:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:51.494 04:08:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:51.494 04:08:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:51.494 04:08:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:51.494 04:08:44 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:51.494 04:08:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:51.494 04:08:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:51.494 04:08:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:51.494 04:08:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:51.494 "name": "Existed_Raid", 00:17:51.494 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:51.494 "strip_size_kb": 64, 00:17:51.494 "state": "configuring", 00:17:51.494 "raid_level": "raid5f", 00:17:51.494 "superblock": false, 00:17:51.494 "num_base_bdevs": 4, 00:17:51.494 "num_base_bdevs_discovered": 2, 00:17:51.494 "num_base_bdevs_operational": 4, 00:17:51.494 "base_bdevs_list": [ 00:17:51.494 { 00:17:51.494 "name": "BaseBdev1", 00:17:51.494 "uuid": "2c6ca2f1-732a-4086-ab39-f0c9b76a85e3", 00:17:51.494 "is_configured": true, 00:17:51.494 "data_offset": 0, 00:17:51.494 "data_size": 65536 00:17:51.494 }, 00:17:51.494 { 00:17:51.494 "name": null, 00:17:51.494 "uuid": "d3f4f801-5005-4828-8496-9734e8ff2ca6", 00:17:51.494 "is_configured": false, 00:17:51.494 "data_offset": 0, 00:17:51.494 "data_size": 65536 00:17:51.494 }, 00:17:51.494 { 00:17:51.494 "name": null, 00:17:51.494 "uuid": "61836e04-8c58-4200-aabd-de598c6ef709", 00:17:51.494 "is_configured": false, 00:17:51.494 "data_offset": 0, 00:17:51.494 "data_size": 65536 00:17:51.494 }, 00:17:51.494 { 00:17:51.494 "name": "BaseBdev4", 00:17:51.494 "uuid": "e4e64a6e-84f9-426c-9512-83d281d1527c", 00:17:51.494 "is_configured": true, 00:17:51.494 "data_offset": 0, 00:17:51.494 "data_size": 65536 00:17:51.494 } 00:17:51.494 ] 00:17:51.494 }' 00:17:51.494 04:08:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:51.494 04:08:44 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:51.752 04:08:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:51.753 04:08:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:17:51.753 04:08:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:51.753 04:08:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:52.010 04:08:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:52.010 04:08:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:17:52.010 04:08:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:17:52.010 04:08:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:52.010 04:08:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:52.010 [2024-12-06 04:08:45.142212] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:17:52.010 04:08:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:52.010 04:08:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:17:52.010 04:08:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:52.010 04:08:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:52.010 04:08:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:52.010 04:08:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:52.010 
04:08:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:52.010 04:08:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:52.010 04:08:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:52.010 04:08:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:52.010 04:08:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:52.010 04:08:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:52.010 04:08:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:52.010 04:08:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:52.010 04:08:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:52.010 04:08:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:52.010 04:08:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:52.010 "name": "Existed_Raid", 00:17:52.010 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:52.010 "strip_size_kb": 64, 00:17:52.010 "state": "configuring", 00:17:52.010 "raid_level": "raid5f", 00:17:52.010 "superblock": false, 00:17:52.010 "num_base_bdevs": 4, 00:17:52.010 "num_base_bdevs_discovered": 3, 00:17:52.010 "num_base_bdevs_operational": 4, 00:17:52.010 "base_bdevs_list": [ 00:17:52.010 { 00:17:52.010 "name": "BaseBdev1", 00:17:52.010 "uuid": "2c6ca2f1-732a-4086-ab39-f0c9b76a85e3", 00:17:52.010 "is_configured": true, 00:17:52.010 "data_offset": 0, 00:17:52.010 "data_size": 65536 00:17:52.010 }, 00:17:52.011 { 00:17:52.011 "name": null, 00:17:52.011 "uuid": "d3f4f801-5005-4828-8496-9734e8ff2ca6", 00:17:52.011 "is_configured": 
false, 00:17:52.011 "data_offset": 0, 00:17:52.011 "data_size": 65536 00:17:52.011 }, 00:17:52.011 { 00:17:52.011 "name": "BaseBdev3", 00:17:52.011 "uuid": "61836e04-8c58-4200-aabd-de598c6ef709", 00:17:52.011 "is_configured": true, 00:17:52.011 "data_offset": 0, 00:17:52.011 "data_size": 65536 00:17:52.011 }, 00:17:52.011 { 00:17:52.011 "name": "BaseBdev4", 00:17:52.011 "uuid": "e4e64a6e-84f9-426c-9512-83d281d1527c", 00:17:52.011 "is_configured": true, 00:17:52.011 "data_offset": 0, 00:17:52.011 "data_size": 65536 00:17:52.011 } 00:17:52.011 ] 00:17:52.011 }' 00:17:52.011 04:08:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:52.011 04:08:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:52.268 04:08:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:52.268 04:08:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:17:52.268 04:08:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:52.268 04:08:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:52.268 04:08:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:52.527 04:08:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:17:52.527 04:08:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:17:52.527 04:08:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:52.527 04:08:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:52.527 [2024-12-06 04:08:45.637420] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:17:52.527 04:08:45 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:52.527 04:08:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:17:52.527 04:08:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:52.527 04:08:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:52.527 04:08:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:52.527 04:08:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:52.527 04:08:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:52.527 04:08:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:52.528 04:08:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:52.528 04:08:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:52.528 04:08:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:52.528 04:08:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:52.528 04:08:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:52.528 04:08:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:52.528 04:08:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:52.528 04:08:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:52.528 04:08:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:52.528 "name": "Existed_Raid", 00:17:52.528 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:17:52.528 "strip_size_kb": 64, 00:17:52.528 "state": "configuring", 00:17:52.528 "raid_level": "raid5f", 00:17:52.528 "superblock": false, 00:17:52.528 "num_base_bdevs": 4, 00:17:52.528 "num_base_bdevs_discovered": 2, 00:17:52.528 "num_base_bdevs_operational": 4, 00:17:52.528 "base_bdevs_list": [ 00:17:52.528 { 00:17:52.528 "name": null, 00:17:52.528 "uuid": "2c6ca2f1-732a-4086-ab39-f0c9b76a85e3", 00:17:52.528 "is_configured": false, 00:17:52.528 "data_offset": 0, 00:17:52.528 "data_size": 65536 00:17:52.528 }, 00:17:52.528 { 00:17:52.528 "name": null, 00:17:52.528 "uuid": "d3f4f801-5005-4828-8496-9734e8ff2ca6", 00:17:52.528 "is_configured": false, 00:17:52.528 "data_offset": 0, 00:17:52.528 "data_size": 65536 00:17:52.528 }, 00:17:52.528 { 00:17:52.528 "name": "BaseBdev3", 00:17:52.528 "uuid": "61836e04-8c58-4200-aabd-de598c6ef709", 00:17:52.528 "is_configured": true, 00:17:52.528 "data_offset": 0, 00:17:52.528 "data_size": 65536 00:17:52.528 }, 00:17:52.528 { 00:17:52.528 "name": "BaseBdev4", 00:17:52.528 "uuid": "e4e64a6e-84f9-426c-9512-83d281d1527c", 00:17:52.528 "is_configured": true, 00:17:52.528 "data_offset": 0, 00:17:52.528 "data_size": 65536 00:17:52.528 } 00:17:52.528 ] 00:17:52.528 }' 00:17:52.528 04:08:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:52.528 04:08:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:53.095 04:08:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:53.095 04:08:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:53.095 04:08:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:53.095 04:08:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:17:53.095 04:08:46 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:53.095 04:08:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:17:53.095 04:08:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:17:53.095 04:08:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:53.095 04:08:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:53.095 [2024-12-06 04:08:46.236653] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:53.095 04:08:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:53.095 04:08:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:17:53.095 04:08:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:53.095 04:08:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:53.095 04:08:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:53.095 04:08:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:53.095 04:08:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:53.095 04:08:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:53.095 04:08:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:53.095 04:08:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:53.095 04:08:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:53.095 04:08:46 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:53.095 04:08:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:53.095 04:08:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:53.095 04:08:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:53.095 04:08:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:53.095 04:08:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:53.096 "name": "Existed_Raid", 00:17:53.096 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:53.096 "strip_size_kb": 64, 00:17:53.096 "state": "configuring", 00:17:53.096 "raid_level": "raid5f", 00:17:53.096 "superblock": false, 00:17:53.096 "num_base_bdevs": 4, 00:17:53.096 "num_base_bdevs_discovered": 3, 00:17:53.096 "num_base_bdevs_operational": 4, 00:17:53.096 "base_bdevs_list": [ 00:17:53.096 { 00:17:53.096 "name": null, 00:17:53.096 "uuid": "2c6ca2f1-732a-4086-ab39-f0c9b76a85e3", 00:17:53.096 "is_configured": false, 00:17:53.096 "data_offset": 0, 00:17:53.096 "data_size": 65536 00:17:53.096 }, 00:17:53.096 { 00:17:53.096 "name": "BaseBdev2", 00:17:53.096 "uuid": "d3f4f801-5005-4828-8496-9734e8ff2ca6", 00:17:53.096 "is_configured": true, 00:17:53.096 "data_offset": 0, 00:17:53.096 "data_size": 65536 00:17:53.096 }, 00:17:53.096 { 00:17:53.096 "name": "BaseBdev3", 00:17:53.096 "uuid": "61836e04-8c58-4200-aabd-de598c6ef709", 00:17:53.096 "is_configured": true, 00:17:53.096 "data_offset": 0, 00:17:53.096 "data_size": 65536 00:17:53.096 }, 00:17:53.096 { 00:17:53.096 "name": "BaseBdev4", 00:17:53.096 "uuid": "e4e64a6e-84f9-426c-9512-83d281d1527c", 00:17:53.096 "is_configured": true, 00:17:53.096 "data_offset": 0, 00:17:53.096 "data_size": 65536 00:17:53.096 } 00:17:53.096 ] 00:17:53.096 }' 00:17:53.096 04:08:46 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:53.096 04:08:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:53.354 04:08:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:17:53.354 04:08:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:53.354 04:08:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:53.354 04:08:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:53.354 04:08:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:53.354 04:08:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:17:53.613 04:08:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:53.613 04:08:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:17:53.613 04:08:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:53.613 04:08:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:53.613 04:08:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:53.613 04:08:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 2c6ca2f1-732a-4086-ab39-f0c9b76a85e3 00:17:53.613 04:08:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:53.613 04:08:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:53.613 [2024-12-06 04:08:46.797731] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:17:53.613 [2024-12-06 
04:08:46.797800] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:17:53.613 [2024-12-06 04:08:46.797809] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:17:53.613 [2024-12-06 04:08:46.798093] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:17:53.613 [2024-12-06 04:08:46.805377] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:17:53.613 [2024-12-06 04:08:46.805422] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:17:53.613 [2024-12-06 04:08:46.805690] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:53.613 NewBaseBdev 00:17:53.613 04:08:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:53.613 04:08:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:17:53.613 04:08:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:17:53.613 04:08:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:17:53.613 04:08:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:17:53.613 04:08:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:17:53.613 04:08:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:17:53.613 04:08:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:17:53.613 04:08:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:53.613 04:08:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:53.613 04:08:46 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:53.613 04:08:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:17:53.613 04:08:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:53.613 04:08:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:53.613 [ 00:17:53.613 { 00:17:53.613 "name": "NewBaseBdev", 00:17:53.613 "aliases": [ 00:17:53.613 "2c6ca2f1-732a-4086-ab39-f0c9b76a85e3" 00:17:53.613 ], 00:17:53.613 "product_name": "Malloc disk", 00:17:53.613 "block_size": 512, 00:17:53.613 "num_blocks": 65536, 00:17:53.613 "uuid": "2c6ca2f1-732a-4086-ab39-f0c9b76a85e3", 00:17:53.613 "assigned_rate_limits": { 00:17:53.613 "rw_ios_per_sec": 0, 00:17:53.613 "rw_mbytes_per_sec": 0, 00:17:53.613 "r_mbytes_per_sec": 0, 00:17:53.613 "w_mbytes_per_sec": 0 00:17:53.613 }, 00:17:53.613 "claimed": true, 00:17:53.613 "claim_type": "exclusive_write", 00:17:53.613 "zoned": false, 00:17:53.613 "supported_io_types": { 00:17:53.613 "read": true, 00:17:53.613 "write": true, 00:17:53.613 "unmap": true, 00:17:53.613 "flush": true, 00:17:53.613 "reset": true, 00:17:53.613 "nvme_admin": false, 00:17:53.613 "nvme_io": false, 00:17:53.613 "nvme_io_md": false, 00:17:53.613 "write_zeroes": true, 00:17:53.613 "zcopy": true, 00:17:53.613 "get_zone_info": false, 00:17:53.613 "zone_management": false, 00:17:53.613 "zone_append": false, 00:17:53.613 "compare": false, 00:17:53.613 "compare_and_write": false, 00:17:53.613 "abort": true, 00:17:53.613 "seek_hole": false, 00:17:53.613 "seek_data": false, 00:17:53.613 "copy": true, 00:17:53.613 "nvme_iov_md": false 00:17:53.614 }, 00:17:53.614 "memory_domains": [ 00:17:53.614 { 00:17:53.614 "dma_device_id": "system", 00:17:53.614 "dma_device_type": 1 00:17:53.614 }, 00:17:53.614 { 00:17:53.614 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:53.614 "dma_device_type": 2 00:17:53.614 } 
00:17:53.614 ], 00:17:53.614 "driver_specific": {} 00:17:53.614 } 00:17:53.614 ] 00:17:53.614 04:08:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:53.614 04:08:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:17:53.614 04:08:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 4 00:17:53.614 04:08:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:53.614 04:08:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:53.614 04:08:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:53.614 04:08:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:53.614 04:08:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:53.614 04:08:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:53.614 04:08:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:53.614 04:08:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:53.614 04:08:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:53.614 04:08:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:53.614 04:08:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:53.614 04:08:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:53.614 04:08:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:53.614 04:08:46 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:53.614 04:08:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:53.614 "name": "Existed_Raid", 00:17:53.614 "uuid": "31f5634a-d10a-4f43-bc72-87e7d38c1f5a", 00:17:53.614 "strip_size_kb": 64, 00:17:53.614 "state": "online", 00:17:53.614 "raid_level": "raid5f", 00:17:53.614 "superblock": false, 00:17:53.614 "num_base_bdevs": 4, 00:17:53.614 "num_base_bdevs_discovered": 4, 00:17:53.614 "num_base_bdevs_operational": 4, 00:17:53.614 "base_bdevs_list": [ 00:17:53.614 { 00:17:53.614 "name": "NewBaseBdev", 00:17:53.614 "uuid": "2c6ca2f1-732a-4086-ab39-f0c9b76a85e3", 00:17:53.614 "is_configured": true, 00:17:53.614 "data_offset": 0, 00:17:53.614 "data_size": 65536 00:17:53.614 }, 00:17:53.614 { 00:17:53.614 "name": "BaseBdev2", 00:17:53.614 "uuid": "d3f4f801-5005-4828-8496-9734e8ff2ca6", 00:17:53.614 "is_configured": true, 00:17:53.614 "data_offset": 0, 00:17:53.614 "data_size": 65536 00:17:53.614 }, 00:17:53.614 { 00:17:53.614 "name": "BaseBdev3", 00:17:53.614 "uuid": "61836e04-8c58-4200-aabd-de598c6ef709", 00:17:53.614 "is_configured": true, 00:17:53.614 "data_offset": 0, 00:17:53.614 "data_size": 65536 00:17:53.614 }, 00:17:53.614 { 00:17:53.614 "name": "BaseBdev4", 00:17:53.614 "uuid": "e4e64a6e-84f9-426c-9512-83d281d1527c", 00:17:53.614 "is_configured": true, 00:17:53.614 "data_offset": 0, 00:17:53.614 "data_size": 65536 00:17:53.614 } 00:17:53.614 ] 00:17:53.614 }' 00:17:53.614 04:08:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:53.614 04:08:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:53.892 04:08:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:17:53.892 04:08:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:17:53.892 04:08:47 bdev_raid.raid5f_state_function_test 
-- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:17:53.892 04:08:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:17:53.892 04:08:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:17:53.892 04:08:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:17:53.892 04:08:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:17:53.892 04:08:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:53.892 04:08:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:53.892 04:08:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:17:54.151 [2024-12-06 04:08:47.249871] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:54.151 04:08:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:54.151 04:08:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:17:54.151 "name": "Existed_Raid", 00:17:54.151 "aliases": [ 00:17:54.151 "31f5634a-d10a-4f43-bc72-87e7d38c1f5a" 00:17:54.151 ], 00:17:54.151 "product_name": "Raid Volume", 00:17:54.151 "block_size": 512, 00:17:54.151 "num_blocks": 196608, 00:17:54.151 "uuid": "31f5634a-d10a-4f43-bc72-87e7d38c1f5a", 00:17:54.151 "assigned_rate_limits": { 00:17:54.151 "rw_ios_per_sec": 0, 00:17:54.151 "rw_mbytes_per_sec": 0, 00:17:54.151 "r_mbytes_per_sec": 0, 00:17:54.151 "w_mbytes_per_sec": 0 00:17:54.151 }, 00:17:54.151 "claimed": false, 00:17:54.151 "zoned": false, 00:17:54.151 "supported_io_types": { 00:17:54.151 "read": true, 00:17:54.151 "write": true, 00:17:54.151 "unmap": false, 00:17:54.151 "flush": false, 00:17:54.151 "reset": true, 00:17:54.151 "nvme_admin": false, 00:17:54.151 "nvme_io": false, 00:17:54.151 "nvme_io_md": 
false, 00:17:54.151 "write_zeroes": true, 00:17:54.151 "zcopy": false, 00:17:54.151 "get_zone_info": false, 00:17:54.151 "zone_management": false, 00:17:54.151 "zone_append": false, 00:17:54.151 "compare": false, 00:17:54.151 "compare_and_write": false, 00:17:54.151 "abort": false, 00:17:54.151 "seek_hole": false, 00:17:54.151 "seek_data": false, 00:17:54.151 "copy": false, 00:17:54.151 "nvme_iov_md": false 00:17:54.151 }, 00:17:54.151 "driver_specific": { 00:17:54.151 "raid": { 00:17:54.151 "uuid": "31f5634a-d10a-4f43-bc72-87e7d38c1f5a", 00:17:54.151 "strip_size_kb": 64, 00:17:54.151 "state": "online", 00:17:54.152 "raid_level": "raid5f", 00:17:54.152 "superblock": false, 00:17:54.152 "num_base_bdevs": 4, 00:17:54.152 "num_base_bdevs_discovered": 4, 00:17:54.152 "num_base_bdevs_operational": 4, 00:17:54.152 "base_bdevs_list": [ 00:17:54.152 { 00:17:54.152 "name": "NewBaseBdev", 00:17:54.152 "uuid": "2c6ca2f1-732a-4086-ab39-f0c9b76a85e3", 00:17:54.152 "is_configured": true, 00:17:54.152 "data_offset": 0, 00:17:54.152 "data_size": 65536 00:17:54.152 }, 00:17:54.152 { 00:17:54.152 "name": "BaseBdev2", 00:17:54.152 "uuid": "d3f4f801-5005-4828-8496-9734e8ff2ca6", 00:17:54.152 "is_configured": true, 00:17:54.152 "data_offset": 0, 00:17:54.152 "data_size": 65536 00:17:54.152 }, 00:17:54.152 { 00:17:54.152 "name": "BaseBdev3", 00:17:54.152 "uuid": "61836e04-8c58-4200-aabd-de598c6ef709", 00:17:54.152 "is_configured": true, 00:17:54.152 "data_offset": 0, 00:17:54.152 "data_size": 65536 00:17:54.152 }, 00:17:54.152 { 00:17:54.152 "name": "BaseBdev4", 00:17:54.152 "uuid": "e4e64a6e-84f9-426c-9512-83d281d1527c", 00:17:54.152 "is_configured": true, 00:17:54.152 "data_offset": 0, 00:17:54.152 "data_size": 65536 00:17:54.152 } 00:17:54.152 ] 00:17:54.152 } 00:17:54.152 } 00:17:54.152 }' 00:17:54.152 04:08:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:17:54.152 04:08:47 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:17:54.152 BaseBdev2 00:17:54.152 BaseBdev3 00:17:54.152 BaseBdev4' 00:17:54.152 04:08:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:54.152 04:08:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:17:54.152 04:08:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:54.152 04:08:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:17:54.152 04:08:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:54.152 04:08:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:54.152 04:08:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:54.152 04:08:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:54.152 04:08:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:17:54.152 04:08:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:17:54.152 04:08:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:54.152 04:08:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:17:54.152 04:08:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:54.152 04:08:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:54.152 04:08:47 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:17:54.152 04:08:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:54.152 04:08:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:17:54.152 04:08:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:17:54.152 04:08:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:54.152 04:08:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:17:54.152 04:08:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:54.152 04:08:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:54.152 04:08:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:54.152 04:08:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:54.152 04:08:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:17:54.152 04:08:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:17:54.152 04:08:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:54.152 04:08:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:17:54.152 04:08:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:54.152 04:08:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:54.152 04:08:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:54.412 04:08:47 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:54.412 04:08:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:17:54.412 04:08:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:17:54.412 04:08:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:17:54.412 04:08:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:54.412 04:08:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:54.412 [2024-12-06 04:08:47.545135] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:17:54.412 [2024-12-06 04:08:47.545169] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:54.412 [2024-12-06 04:08:47.545247] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:54.412 [2024-12-06 04:08:47.545606] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:54.412 [2024-12-06 04:08:47.545629] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:17:54.412 04:08:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:54.412 04:08:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 83047 00:17:54.412 04:08:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@954 -- # '[' -z 83047 ']' 00:17:54.412 04:08:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@958 -- # kill -0 83047 00:17:54.412 04:08:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@959 -- # uname 00:17:54.412 04:08:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 
00:17:54.412 04:08:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 83047 00:17:54.412 killing process with pid 83047 00:17:54.412 04:08:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:54.412 04:08:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:54.412 04:08:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 83047' 00:17:54.412 04:08:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@973 -- # kill 83047 00:17:54.412 [2024-12-06 04:08:47.593273] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:17:54.412 04:08:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@978 -- # wait 83047 00:17:54.672 [2024-12-06 04:08:47.992406] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:17:56.051 04:08:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:17:56.051 00:17:56.051 real 0m11.951s 00:17:56.051 user 0m18.912s 00:17:56.051 sys 0m2.222s 00:17:56.051 04:08:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:56.051 04:08:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:56.051 ************************************ 00:17:56.051 END TEST raid5f_state_function_test 00:17:56.051 ************************************ 00:17:56.051 04:08:49 bdev_raid -- bdev/bdev_raid.sh@987 -- # run_test raid5f_state_function_test_sb raid_state_function_test raid5f 4 true 00:17:56.051 04:08:49 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:17:56.051 04:08:49 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:56.051 04:08:49 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:17:56.051 ************************************ 00:17:56.051 START TEST 
raid5f_state_function_test_sb 00:17:56.051 ************************************ 00:17:56.051 04:08:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test raid5f 4 true 00:17:56.051 04:08:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid5f 00:17:56.051 04:08:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:17:56.051 04:08:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:17:56.051 04:08:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:17:56.051 04:08:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:17:56.051 04:08:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:17:56.051 04:08:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:17:56.051 04:08:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:17:56.051 04:08:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:17:56.051 04:08:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:17:56.051 04:08:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:17:56.051 04:08:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:17:56.051 04:08:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:17:56.051 04:08:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:17:56.051 04:08:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:17:56.051 04:08:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:17:56.051 
04:08:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:17:56.051 04:08:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:17:56.051 04:08:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:17:56.051 04:08:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:17:56.051 04:08:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:17:56.051 04:08:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:17:56.051 04:08:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:17:56.051 04:08:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:17:56.051 04:08:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid5f '!=' raid1 ']' 00:17:56.051 04:08:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:17:56.051 04:08:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:17:56.051 04:08:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:17:56.051 04:08:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:17:56.051 04:08:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=83724 00:17:56.051 04:08:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:17:56.051 04:08:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 83724' 00:17:56.051 Process raid pid: 83724 00:17:56.051 04:08:49 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 83724 00:17:56.051 04:08:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 83724 ']' 00:17:56.051 04:08:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:56.051 04:08:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:56.051 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:56.052 04:08:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:56.052 04:08:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:56.052 04:08:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:56.052 [2024-12-06 04:08:49.301257] Starting SPDK v25.01-pre git sha1 a4d2a837b / DPDK 24.03.0 initialization... 
00:17:56.052 [2024-12-06 04:08:49.301401] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:56.310 [2024-12-06 04:08:49.474179] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:56.310 [2024-12-06 04:08:49.594098] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:56.568 [2024-12-06 04:08:49.799543] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:56.568 [2024-12-06 04:08:49.799589] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:56.851 04:08:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:56.851 04:08:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:17:56.851 04:08:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:17:56.851 04:08:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:56.851 04:08:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:56.851 [2024-12-06 04:08:50.157443] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:17:56.851 [2024-12-06 04:08:50.157504] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:17:56.851 [2024-12-06 04:08:50.157520] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:17:56.851 [2024-12-06 04:08:50.157543] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:56.851 [2024-12-06 04:08:50.157549] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently 
unable to find bdev with name: BaseBdev3 00:17:56.851 [2024-12-06 04:08:50.157558] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:17:56.851 [2024-12-06 04:08:50.157567] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:17:56.851 [2024-12-06 04:08:50.157576] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:17:56.851 04:08:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:56.851 04:08:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:17:56.851 04:08:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:56.851 04:08:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:56.851 04:08:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:56.851 04:08:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:56.851 04:08:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:56.851 04:08:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:56.851 04:08:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:56.851 04:08:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:56.851 04:08:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:56.851 04:08:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:56.851 04:08:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 
00:17:56.852 04:08:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:56.852 04:08:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:56.852 04:08:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:57.129 04:08:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:57.129 "name": "Existed_Raid", 00:17:57.129 "uuid": "68cca193-f694-4b57-9059-97761a977c98", 00:17:57.129 "strip_size_kb": 64, 00:17:57.129 "state": "configuring", 00:17:57.129 "raid_level": "raid5f", 00:17:57.129 "superblock": true, 00:17:57.129 "num_base_bdevs": 4, 00:17:57.129 "num_base_bdevs_discovered": 0, 00:17:57.129 "num_base_bdevs_operational": 4, 00:17:57.129 "base_bdevs_list": [ 00:17:57.129 { 00:17:57.129 "name": "BaseBdev1", 00:17:57.129 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:57.129 "is_configured": false, 00:17:57.129 "data_offset": 0, 00:17:57.129 "data_size": 0 00:17:57.129 }, 00:17:57.129 { 00:17:57.129 "name": "BaseBdev2", 00:17:57.129 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:57.129 "is_configured": false, 00:17:57.129 "data_offset": 0, 00:17:57.129 "data_size": 0 00:17:57.129 }, 00:17:57.129 { 00:17:57.129 "name": "BaseBdev3", 00:17:57.129 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:57.129 "is_configured": false, 00:17:57.129 "data_offset": 0, 00:17:57.129 "data_size": 0 00:17:57.129 }, 00:17:57.129 { 00:17:57.129 "name": "BaseBdev4", 00:17:57.129 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:57.129 "is_configured": false, 00:17:57.129 "data_offset": 0, 00:17:57.129 "data_size": 0 00:17:57.129 } 00:17:57.129 ] 00:17:57.129 }' 00:17:57.129 04:08:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:57.129 04:08:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 
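The xtrace above shows the test piping `rpc_cmd bdev_raid_get_bdevs all` through `jq -r '.[] | select(.name == "Existed_Raid")'` to capture one raid bdev's info into `raid_bdev_info`. A minimal Python sketch of that selection step, using a trimmed copy of the JSON dumped in the log (the field values come from the dump above; this is an illustration of the filter, not SPDK code):

```python
import json

# Trimmed copy of the `bdev_raid_get_bdevs all` output shown in the log above.
raw = json.dumps([{
    "name": "Existed_Raid",
    "uuid": "68cca193-f694-4b57-9059-97761a977c98",
    "strip_size_kb": 64,
    "state": "configuring",
    "raid_level": "raid5f",
    "superblock": True,
    "num_base_bdevs": 4,
    "num_base_bdevs_discovered": 0,
    "num_base_bdevs_operational": 4,
}])

# Equivalent of: jq -r '.[] | select(.name == "Existed_Raid")'
info = next(b for b in json.loads(raw) if b["name"] == "Existed_Raid")

# verify_raid_bdev_state then checks fields of this object: before any base
# bdev exists, the array sits in "configuring" with nothing discovered.
assert info["state"] == "configuring"
assert info["num_base_bdevs_discovered"] == 0
```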
00:17:57.387 04:08:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:17:57.388 04:08:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:57.388 04:08:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:57.388 [2024-12-06 04:08:50.604680] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:17:57.388 [2024-12-06 04:08:50.604728] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:17:57.388 04:08:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:57.388 04:08:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:17:57.388 04:08:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:57.388 04:08:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:57.388 [2024-12-06 04:08:50.616630] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:17:57.388 [2024-12-06 04:08:50.616676] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:17:57.388 [2024-12-06 04:08:50.616687] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:17:57.388 [2024-12-06 04:08:50.616696] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:57.388 [2024-12-06 04:08:50.616702] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:17:57.388 [2024-12-06 04:08:50.616711] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:17:57.388 [2024-12-06 04:08:50.616718] 
bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:17:57.388 [2024-12-06 04:08:50.616726] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:17:57.388 04:08:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:57.388 04:08:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:17:57.388 04:08:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:57.388 04:08:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:57.388 [2024-12-06 04:08:50.666576] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:57.388 BaseBdev1 00:17:57.388 04:08:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:57.388 04:08:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:17:57.388 04:08:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:17:57.388 04:08:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:17:57.388 04:08:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:17:57.388 04:08:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:17:57.388 04:08:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:17:57.388 04:08:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:17:57.388 04:08:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:57.388 04:08:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set 
+x 00:17:57.388 04:08:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:57.388 04:08:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:17:57.388 04:08:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:57.388 04:08:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:57.388 [ 00:17:57.388 { 00:17:57.388 "name": "BaseBdev1", 00:17:57.388 "aliases": [ 00:17:57.388 "aa4d4854-d918-4836-ac47-215664c3786a" 00:17:57.388 ], 00:17:57.388 "product_name": "Malloc disk", 00:17:57.388 "block_size": 512, 00:17:57.388 "num_blocks": 65536, 00:17:57.388 "uuid": "aa4d4854-d918-4836-ac47-215664c3786a", 00:17:57.388 "assigned_rate_limits": { 00:17:57.388 "rw_ios_per_sec": 0, 00:17:57.388 "rw_mbytes_per_sec": 0, 00:17:57.388 "r_mbytes_per_sec": 0, 00:17:57.388 "w_mbytes_per_sec": 0 00:17:57.388 }, 00:17:57.388 "claimed": true, 00:17:57.388 "claim_type": "exclusive_write", 00:17:57.388 "zoned": false, 00:17:57.388 "supported_io_types": { 00:17:57.388 "read": true, 00:17:57.388 "write": true, 00:17:57.388 "unmap": true, 00:17:57.388 "flush": true, 00:17:57.388 "reset": true, 00:17:57.388 "nvme_admin": false, 00:17:57.388 "nvme_io": false, 00:17:57.388 "nvme_io_md": false, 00:17:57.388 "write_zeroes": true, 00:17:57.388 "zcopy": true, 00:17:57.388 "get_zone_info": false, 00:17:57.388 "zone_management": false, 00:17:57.388 "zone_append": false, 00:17:57.388 "compare": false, 00:17:57.388 "compare_and_write": false, 00:17:57.388 "abort": true, 00:17:57.388 "seek_hole": false, 00:17:57.388 "seek_data": false, 00:17:57.388 "copy": true, 00:17:57.388 "nvme_iov_md": false 00:17:57.388 }, 00:17:57.388 "memory_domains": [ 00:17:57.388 { 00:17:57.388 "dma_device_id": "system", 00:17:57.388 "dma_device_type": 1 00:17:57.388 }, 00:17:57.388 { 00:17:57.388 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:17:57.388 "dma_device_type": 2 00:17:57.388 } 00:17:57.388 ], 00:17:57.388 "driver_specific": {} 00:17:57.388 } 00:17:57.388 ] 00:17:57.388 04:08:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:57.388 04:08:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:17:57.388 04:08:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:17:57.388 04:08:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:57.388 04:08:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:57.388 04:08:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:57.388 04:08:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:57.388 04:08:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:57.388 04:08:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:57.388 04:08:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:57.388 04:08:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:57.388 04:08:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:57.388 04:08:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:57.388 04:08:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:57.388 04:08:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:57.388 04:08:50 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:57.388 04:08:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:57.647 04:08:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:57.647 "name": "Existed_Raid", 00:17:57.647 "uuid": "864d5284-ffdb-4444-92ed-3b8ac0864ba6", 00:17:57.647 "strip_size_kb": 64, 00:17:57.647 "state": "configuring", 00:17:57.647 "raid_level": "raid5f", 00:17:57.647 "superblock": true, 00:17:57.647 "num_base_bdevs": 4, 00:17:57.647 "num_base_bdevs_discovered": 1, 00:17:57.647 "num_base_bdevs_operational": 4, 00:17:57.647 "base_bdevs_list": [ 00:17:57.647 { 00:17:57.647 "name": "BaseBdev1", 00:17:57.647 "uuid": "aa4d4854-d918-4836-ac47-215664c3786a", 00:17:57.647 "is_configured": true, 00:17:57.647 "data_offset": 2048, 00:17:57.647 "data_size": 63488 00:17:57.647 }, 00:17:57.647 { 00:17:57.647 "name": "BaseBdev2", 00:17:57.647 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:57.647 "is_configured": false, 00:17:57.647 "data_offset": 0, 00:17:57.647 "data_size": 0 00:17:57.647 }, 00:17:57.647 { 00:17:57.647 "name": "BaseBdev3", 00:17:57.647 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:57.647 "is_configured": false, 00:17:57.647 "data_offset": 0, 00:17:57.647 "data_size": 0 00:17:57.647 }, 00:17:57.647 { 00:17:57.647 "name": "BaseBdev4", 00:17:57.647 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:57.647 "is_configured": false, 00:17:57.647 "data_offset": 0, 00:17:57.647 "data_size": 0 00:17:57.647 } 00:17:57.647 ] 00:17:57.647 }' 00:17:57.647 04:08:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:57.647 04:08:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:57.906 04:08:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:17:57.906 04:08:51 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:57.906 04:08:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:57.906 [2024-12-06 04:08:51.145858] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:17:57.906 [2024-12-06 04:08:51.145917] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:17:57.906 04:08:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:57.906 04:08:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:17:57.906 04:08:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:57.906 04:08:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:57.906 [2024-12-06 04:08:51.157926] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:57.906 [2024-12-06 04:08:51.160013] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:17:57.906 [2024-12-06 04:08:51.160072] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:57.906 [2024-12-06 04:08:51.160086] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:17:57.906 [2024-12-06 04:08:51.160100] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:17:57.906 [2024-12-06 04:08:51.160108] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:17:57.906 [2024-12-06 04:08:51.160117] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:17:57.906 04:08:51 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:57.906 04:08:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:17:57.906 04:08:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:17:57.906 04:08:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:17:57.906 04:08:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:57.906 04:08:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:57.906 04:08:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:57.906 04:08:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:57.906 04:08:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:57.906 04:08:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:57.906 04:08:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:57.906 04:08:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:57.906 04:08:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:57.906 04:08:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:57.906 04:08:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:57.906 04:08:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:57.906 04:08:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:57.906 04:08:51 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:57.906 04:08:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:57.906 "name": "Existed_Raid", 00:17:57.906 "uuid": "22915022-aafc-477e-a3ef-74cdede5652b", 00:17:57.906 "strip_size_kb": 64, 00:17:57.906 "state": "configuring", 00:17:57.906 "raid_level": "raid5f", 00:17:57.906 "superblock": true, 00:17:57.906 "num_base_bdevs": 4, 00:17:57.906 "num_base_bdevs_discovered": 1, 00:17:57.906 "num_base_bdevs_operational": 4, 00:17:57.906 "base_bdevs_list": [ 00:17:57.906 { 00:17:57.906 "name": "BaseBdev1", 00:17:57.906 "uuid": "aa4d4854-d918-4836-ac47-215664c3786a", 00:17:57.906 "is_configured": true, 00:17:57.906 "data_offset": 2048, 00:17:57.906 "data_size": 63488 00:17:57.906 }, 00:17:57.906 { 00:17:57.906 "name": "BaseBdev2", 00:17:57.906 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:57.906 "is_configured": false, 00:17:57.906 "data_offset": 0, 00:17:57.906 "data_size": 0 00:17:57.906 }, 00:17:57.906 { 00:17:57.906 "name": "BaseBdev3", 00:17:57.906 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:57.906 "is_configured": false, 00:17:57.906 "data_offset": 0, 00:17:57.906 "data_size": 0 00:17:57.906 }, 00:17:57.906 { 00:17:57.906 "name": "BaseBdev4", 00:17:57.906 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:57.906 "is_configured": false, 00:17:57.906 "data_offset": 0, 00:17:57.906 "data_size": 0 00:17:57.906 } 00:17:57.906 ] 00:17:57.906 }' 00:17:57.906 04:08:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:57.906 04:08:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:58.475 04:08:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:17:58.475 04:08:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 
00:17:58.475 04:08:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:58.475 [2024-12-06 04:08:51.636662] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:58.475 BaseBdev2 00:17:58.475 04:08:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:58.475 04:08:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:17:58.475 04:08:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:17:58.475 04:08:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:17:58.475 04:08:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:17:58.475 04:08:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:17:58.475 04:08:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:17:58.475 04:08:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:17:58.475 04:08:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:58.475 04:08:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:58.475 04:08:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:58.475 04:08:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:17:58.475 04:08:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:58.475 04:08:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:58.475 [ 00:17:58.475 { 00:17:58.475 "name": "BaseBdev2", 00:17:58.475 "aliases": [ 00:17:58.475 
"bbc473ba-556e-4bf0-af49-292c0d60222f" 00:17:58.475 ], 00:17:58.475 "product_name": "Malloc disk", 00:17:58.475 "block_size": 512, 00:17:58.475 "num_blocks": 65536, 00:17:58.476 "uuid": "bbc473ba-556e-4bf0-af49-292c0d60222f", 00:17:58.476 "assigned_rate_limits": { 00:17:58.476 "rw_ios_per_sec": 0, 00:17:58.476 "rw_mbytes_per_sec": 0, 00:17:58.476 "r_mbytes_per_sec": 0, 00:17:58.476 "w_mbytes_per_sec": 0 00:17:58.476 }, 00:17:58.476 "claimed": true, 00:17:58.476 "claim_type": "exclusive_write", 00:17:58.476 "zoned": false, 00:17:58.476 "supported_io_types": { 00:17:58.476 "read": true, 00:17:58.476 "write": true, 00:17:58.476 "unmap": true, 00:17:58.476 "flush": true, 00:17:58.476 "reset": true, 00:17:58.476 "nvme_admin": false, 00:17:58.476 "nvme_io": false, 00:17:58.476 "nvme_io_md": false, 00:17:58.476 "write_zeroes": true, 00:17:58.476 "zcopy": true, 00:17:58.476 "get_zone_info": false, 00:17:58.476 "zone_management": false, 00:17:58.476 "zone_append": false, 00:17:58.476 "compare": false, 00:17:58.476 "compare_and_write": false, 00:17:58.476 "abort": true, 00:17:58.476 "seek_hole": false, 00:17:58.476 "seek_data": false, 00:17:58.476 "copy": true, 00:17:58.476 "nvme_iov_md": false 00:17:58.476 }, 00:17:58.476 "memory_domains": [ 00:17:58.476 { 00:17:58.476 "dma_device_id": "system", 00:17:58.476 "dma_device_type": 1 00:17:58.476 }, 00:17:58.476 { 00:17:58.476 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:58.476 "dma_device_type": 2 00:17:58.476 } 00:17:58.476 ], 00:17:58.476 "driver_specific": {} 00:17:58.476 } 00:17:58.476 ] 00:17:58.476 04:08:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:58.476 04:08:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:17:58.476 04:08:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:17:58.476 04:08:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 
00:17:58.476 04:08:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:17:58.476 04:08:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:58.476 04:08:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:58.476 04:08:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:58.476 04:08:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:58.476 04:08:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:58.476 04:08:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:58.476 04:08:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:58.476 04:08:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:58.476 04:08:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:58.476 04:08:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:58.476 04:08:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:58.476 04:08:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:58.476 04:08:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:58.476 04:08:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:58.476 04:08:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:58.476 "name": "Existed_Raid", 00:17:58.476 "uuid": 
"22915022-aafc-477e-a3ef-74cdede5652b", 00:17:58.476 "strip_size_kb": 64, 00:17:58.476 "state": "configuring", 00:17:58.476 "raid_level": "raid5f", 00:17:58.476 "superblock": true, 00:17:58.476 "num_base_bdevs": 4, 00:17:58.476 "num_base_bdevs_discovered": 2, 00:17:58.476 "num_base_bdevs_operational": 4, 00:17:58.476 "base_bdevs_list": [ 00:17:58.476 { 00:17:58.476 "name": "BaseBdev1", 00:17:58.476 "uuid": "aa4d4854-d918-4836-ac47-215664c3786a", 00:17:58.476 "is_configured": true, 00:17:58.476 "data_offset": 2048, 00:17:58.476 "data_size": 63488 00:17:58.476 }, 00:17:58.476 { 00:17:58.476 "name": "BaseBdev2", 00:17:58.476 "uuid": "bbc473ba-556e-4bf0-af49-292c0d60222f", 00:17:58.476 "is_configured": true, 00:17:58.476 "data_offset": 2048, 00:17:58.476 "data_size": 63488 00:17:58.476 }, 00:17:58.476 { 00:17:58.476 "name": "BaseBdev3", 00:17:58.476 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:58.476 "is_configured": false, 00:17:58.476 "data_offset": 0, 00:17:58.476 "data_size": 0 00:17:58.476 }, 00:17:58.476 { 00:17:58.476 "name": "BaseBdev4", 00:17:58.476 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:58.476 "is_configured": false, 00:17:58.476 "data_offset": 0, 00:17:58.476 "data_size": 0 00:17:58.476 } 00:17:58.476 ] 00:17:58.476 }' 00:17:58.476 04:08:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:58.476 04:08:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:58.735 04:08:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:17:58.735 04:08:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:58.735 04:08:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:58.994 [2024-12-06 04:08:52.140274] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:17:58.994 BaseBdev3 
00:17:58.994 04:08:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:58.994 04:08:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:17:58.994 04:08:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:17:58.994 04:08:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:17:58.994 04:08:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:17:58.994 04:08:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:17:58.994 04:08:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:17:58.994 04:08:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:17:58.994 04:08:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:58.994 04:08:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:58.994 04:08:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:58.994 04:08:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:17:58.994 04:08:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:58.994 04:08:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:58.994 [ 00:17:58.994 { 00:17:58.994 "name": "BaseBdev3", 00:17:58.994 "aliases": [ 00:17:58.994 "3e354c70-eb4f-400f-a048-fe888320144c" 00:17:58.994 ], 00:17:58.994 "product_name": "Malloc disk", 00:17:58.994 "block_size": 512, 00:17:58.994 "num_blocks": 65536, 00:17:58.994 "uuid": "3e354c70-eb4f-400f-a048-fe888320144c", 00:17:58.994 
"assigned_rate_limits": { 00:17:58.994 "rw_ios_per_sec": 0, 00:17:58.994 "rw_mbytes_per_sec": 0, 00:17:58.994 "r_mbytes_per_sec": 0, 00:17:58.994 "w_mbytes_per_sec": 0 00:17:58.994 }, 00:17:58.994 "claimed": true, 00:17:58.994 "claim_type": "exclusive_write", 00:17:58.994 "zoned": false, 00:17:58.994 "supported_io_types": { 00:17:58.994 "read": true, 00:17:58.994 "write": true, 00:17:58.994 "unmap": true, 00:17:58.994 "flush": true, 00:17:58.994 "reset": true, 00:17:58.994 "nvme_admin": false, 00:17:58.994 "nvme_io": false, 00:17:58.994 "nvme_io_md": false, 00:17:58.994 "write_zeroes": true, 00:17:58.994 "zcopy": true, 00:17:58.994 "get_zone_info": false, 00:17:58.995 "zone_management": false, 00:17:58.995 "zone_append": false, 00:17:58.995 "compare": false, 00:17:58.995 "compare_and_write": false, 00:17:58.995 "abort": true, 00:17:58.995 "seek_hole": false, 00:17:58.995 "seek_data": false, 00:17:58.995 "copy": true, 00:17:58.995 "nvme_iov_md": false 00:17:58.995 }, 00:17:58.995 "memory_domains": [ 00:17:58.995 { 00:17:58.995 "dma_device_id": "system", 00:17:58.995 "dma_device_type": 1 00:17:58.995 }, 00:17:58.995 { 00:17:58.995 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:58.995 "dma_device_type": 2 00:17:58.995 } 00:17:58.995 ], 00:17:58.995 "driver_specific": {} 00:17:58.995 } 00:17:58.995 ] 00:17:58.995 04:08:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:58.995 04:08:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:17:58.995 04:08:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:17:58.995 04:08:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:17:58.995 04:08:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:17:58.995 04:08:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 
-- # local raid_bdev_name=Existed_Raid 00:17:58.995 04:08:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:58.995 04:08:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:58.995 04:08:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:58.995 04:08:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:58.995 04:08:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:58.995 04:08:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:58.995 04:08:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:58.995 04:08:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:58.995 04:08:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:58.995 04:08:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:58.995 04:08:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:58.995 04:08:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:58.995 04:08:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:58.995 04:08:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:58.995 "name": "Existed_Raid", 00:17:58.995 "uuid": "22915022-aafc-477e-a3ef-74cdede5652b", 00:17:58.995 "strip_size_kb": 64, 00:17:58.995 "state": "configuring", 00:17:58.995 "raid_level": "raid5f", 00:17:58.995 "superblock": true, 00:17:58.995 "num_base_bdevs": 4, 00:17:58.995 "num_base_bdevs_discovered": 3, 
00:17:58.995 "num_base_bdevs_operational": 4, 00:17:58.995 "base_bdevs_list": [ 00:17:58.995 { 00:17:58.995 "name": "BaseBdev1", 00:17:58.995 "uuid": "aa4d4854-d918-4836-ac47-215664c3786a", 00:17:58.995 "is_configured": true, 00:17:58.995 "data_offset": 2048, 00:17:58.995 "data_size": 63488 00:17:58.995 }, 00:17:58.995 { 00:17:58.995 "name": "BaseBdev2", 00:17:58.995 "uuid": "bbc473ba-556e-4bf0-af49-292c0d60222f", 00:17:58.995 "is_configured": true, 00:17:58.995 "data_offset": 2048, 00:17:58.995 "data_size": 63488 00:17:58.995 }, 00:17:58.995 { 00:17:58.995 "name": "BaseBdev3", 00:17:58.995 "uuid": "3e354c70-eb4f-400f-a048-fe888320144c", 00:17:58.995 "is_configured": true, 00:17:58.995 "data_offset": 2048, 00:17:58.995 "data_size": 63488 00:17:58.995 }, 00:17:58.995 { 00:17:58.995 "name": "BaseBdev4", 00:17:58.995 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:58.995 "is_configured": false, 00:17:58.995 "data_offset": 0, 00:17:58.995 "data_size": 0 00:17:58.995 } 00:17:58.995 ] 00:17:58.995 }' 00:17:58.995 04:08:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:58.995 04:08:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:59.254 04:08:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:17:59.254 04:08:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:59.254 04:08:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:59.516 [2024-12-06 04:08:52.635237] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:17:59.516 [2024-12-06 04:08:52.635553] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:17:59.516 [2024-12-06 04:08:52.635570] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:17:59.516 [2024-12-06 
04:08:52.635836] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:17:59.516 BaseBdev4 00:17:59.516 04:08:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:59.516 04:08:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:17:59.516 04:08:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:17:59.516 04:08:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:17:59.516 04:08:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:17:59.516 04:08:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:17:59.516 04:08:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:17:59.516 04:08:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:17:59.516 04:08:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:59.516 04:08:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:59.516 [2024-12-06 04:08:52.643831] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:17:59.516 [2024-12-06 04:08:52.643906] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:17:59.516 [2024-12-06 04:08:52.644225] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:59.516 04:08:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:59.516 04:08:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:17:59.516 04:08:52 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:59.516 04:08:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:59.516 [ 00:17:59.516 { 00:17:59.516 "name": "BaseBdev4", 00:17:59.516 "aliases": [ 00:17:59.516 "bd644412-8a6d-4857-97e8-96194d3bec20" 00:17:59.516 ], 00:17:59.516 "product_name": "Malloc disk", 00:17:59.516 "block_size": 512, 00:17:59.516 "num_blocks": 65536, 00:17:59.516 "uuid": "bd644412-8a6d-4857-97e8-96194d3bec20", 00:17:59.516 "assigned_rate_limits": { 00:17:59.516 "rw_ios_per_sec": 0, 00:17:59.516 "rw_mbytes_per_sec": 0, 00:17:59.516 "r_mbytes_per_sec": 0, 00:17:59.516 "w_mbytes_per_sec": 0 00:17:59.516 }, 00:17:59.516 "claimed": true, 00:17:59.516 "claim_type": "exclusive_write", 00:17:59.516 "zoned": false, 00:17:59.516 "supported_io_types": { 00:17:59.516 "read": true, 00:17:59.516 "write": true, 00:17:59.516 "unmap": true, 00:17:59.516 "flush": true, 00:17:59.516 "reset": true, 00:17:59.516 "nvme_admin": false, 00:17:59.516 "nvme_io": false, 00:17:59.516 "nvme_io_md": false, 00:17:59.516 "write_zeroes": true, 00:17:59.516 "zcopy": true, 00:17:59.516 "get_zone_info": false, 00:17:59.516 "zone_management": false, 00:17:59.516 "zone_append": false, 00:17:59.516 "compare": false, 00:17:59.516 "compare_and_write": false, 00:17:59.516 "abort": true, 00:17:59.516 "seek_hole": false, 00:17:59.516 "seek_data": false, 00:17:59.516 "copy": true, 00:17:59.516 "nvme_iov_md": false 00:17:59.516 }, 00:17:59.516 "memory_domains": [ 00:17:59.516 { 00:17:59.516 "dma_device_id": "system", 00:17:59.516 "dma_device_type": 1 00:17:59.516 }, 00:17:59.516 { 00:17:59.516 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:59.516 "dma_device_type": 2 00:17:59.516 } 00:17:59.516 ], 00:17:59.516 "driver_specific": {} 00:17:59.516 } 00:17:59.516 ] 00:17:59.516 04:08:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:59.516 04:08:52 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:17:59.516 04:08:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:17:59.516 04:08:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:17:59.516 04:08:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 4 00:17:59.516 04:08:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:59.516 04:08:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:59.516 04:08:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:59.516 04:08:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:59.516 04:08:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:59.516 04:08:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:59.516 04:08:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:59.516 04:08:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:59.516 04:08:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:59.516 04:08:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:59.516 04:08:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:59.516 04:08:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:59.516 04:08:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 
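Once the fourth base bdev is claimed, the log reports the array configuring with `blockcnt 190464, blocklen 512`. That count follows from raid5f geometry: each of the 4 base bdevs contributes `data_size` 63488 blocks (per the `base_bdevs_list` dumps), and one bdev's worth of blocks per stripe holds parity, so usable capacity is `data_size * (num_base_bdevs - 1)`. A quick check using the values from the log:

```python
num_base_bdevs = 4
data_size = 63488   # per-base-bdev data blocks, from base_bdevs_list
block_size = 512    # blocklen reported at configure time

# raid5f keeps one base bdev's worth of parity per stripe, so the
# usable block count is (n - 1) times the per-bdev data size.
raid_num_blocks = data_size * (num_base_bdevs - 1)
assert raid_num_blocks == 190464  # matches "blockcnt 190464" in the log
```

The same 190464 figure reappears as `num_blocks` in the Raid Volume dump that `verify_raid_bdev_properties` fetches below.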
00:17:59.516 04:08:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:59.516 04:08:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:59.516 "name": "Existed_Raid", 00:17:59.516 "uuid": "22915022-aafc-477e-a3ef-74cdede5652b", 00:17:59.517 "strip_size_kb": 64, 00:17:59.517 "state": "online", 00:17:59.517 "raid_level": "raid5f", 00:17:59.517 "superblock": true, 00:17:59.517 "num_base_bdevs": 4, 00:17:59.517 "num_base_bdevs_discovered": 4, 00:17:59.517 "num_base_bdevs_operational": 4, 00:17:59.517 "base_bdevs_list": [ 00:17:59.517 { 00:17:59.517 "name": "BaseBdev1", 00:17:59.517 "uuid": "aa4d4854-d918-4836-ac47-215664c3786a", 00:17:59.517 "is_configured": true, 00:17:59.517 "data_offset": 2048, 00:17:59.517 "data_size": 63488 00:17:59.517 }, 00:17:59.517 { 00:17:59.517 "name": "BaseBdev2", 00:17:59.517 "uuid": "bbc473ba-556e-4bf0-af49-292c0d60222f", 00:17:59.517 "is_configured": true, 00:17:59.517 "data_offset": 2048, 00:17:59.517 "data_size": 63488 00:17:59.517 }, 00:17:59.517 { 00:17:59.517 "name": "BaseBdev3", 00:17:59.517 "uuid": "3e354c70-eb4f-400f-a048-fe888320144c", 00:17:59.517 "is_configured": true, 00:17:59.517 "data_offset": 2048, 00:17:59.517 "data_size": 63488 00:17:59.517 }, 00:17:59.517 { 00:17:59.517 "name": "BaseBdev4", 00:17:59.517 "uuid": "bd644412-8a6d-4857-97e8-96194d3bec20", 00:17:59.517 "is_configured": true, 00:17:59.517 "data_offset": 2048, 00:17:59.517 "data_size": 63488 00:17:59.517 } 00:17:59.517 ] 00:17:59.517 }' 00:17:59.517 04:08:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:59.517 04:08:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:59.777 04:08:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:17:59.777 04:08:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # 
local raid_bdev_name=Existed_Raid 00:17:59.777 04:08:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:17:59.777 04:08:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:17:59.777 04:08:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:17:59.777 04:08:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:17:59.777 04:08:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:17:59.777 04:08:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:17:59.777 04:08:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:59.777 04:08:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:59.777 [2024-12-06 04:08:53.096467] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:59.777 04:08:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:59.777 04:08:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:17:59.777 "name": "Existed_Raid", 00:17:59.777 "aliases": [ 00:17:59.777 "22915022-aafc-477e-a3ef-74cdede5652b" 00:17:59.777 ], 00:17:59.777 "product_name": "Raid Volume", 00:17:59.777 "block_size": 512, 00:17:59.777 "num_blocks": 190464, 00:17:59.777 "uuid": "22915022-aafc-477e-a3ef-74cdede5652b", 00:17:59.777 "assigned_rate_limits": { 00:17:59.777 "rw_ios_per_sec": 0, 00:17:59.777 "rw_mbytes_per_sec": 0, 00:17:59.777 "r_mbytes_per_sec": 0, 00:17:59.777 "w_mbytes_per_sec": 0 00:17:59.777 }, 00:17:59.777 "claimed": false, 00:17:59.777 "zoned": false, 00:17:59.777 "supported_io_types": { 00:17:59.777 "read": true, 00:17:59.777 "write": true, 00:17:59.777 "unmap": false, 00:17:59.777 "flush": false, 
00:17:59.777 "reset": true, 00:17:59.777 "nvme_admin": false, 00:17:59.777 "nvme_io": false, 00:17:59.777 "nvme_io_md": false, 00:17:59.777 "write_zeroes": true, 00:17:59.777 "zcopy": false, 00:17:59.777 "get_zone_info": false, 00:17:59.777 "zone_management": false, 00:17:59.777 "zone_append": false, 00:17:59.777 "compare": false, 00:17:59.777 "compare_and_write": false, 00:17:59.777 "abort": false, 00:17:59.777 "seek_hole": false, 00:17:59.777 "seek_data": false, 00:17:59.777 "copy": false, 00:17:59.777 "nvme_iov_md": false 00:17:59.777 }, 00:17:59.777 "driver_specific": { 00:17:59.777 "raid": { 00:17:59.777 "uuid": "22915022-aafc-477e-a3ef-74cdede5652b", 00:17:59.777 "strip_size_kb": 64, 00:17:59.777 "state": "online", 00:17:59.777 "raid_level": "raid5f", 00:17:59.777 "superblock": true, 00:17:59.777 "num_base_bdevs": 4, 00:17:59.777 "num_base_bdevs_discovered": 4, 00:17:59.777 "num_base_bdevs_operational": 4, 00:17:59.777 "base_bdevs_list": [ 00:17:59.777 { 00:17:59.777 "name": "BaseBdev1", 00:17:59.777 "uuid": "aa4d4854-d918-4836-ac47-215664c3786a", 00:17:59.777 "is_configured": true, 00:17:59.777 "data_offset": 2048, 00:17:59.777 "data_size": 63488 00:17:59.777 }, 00:17:59.777 { 00:17:59.777 "name": "BaseBdev2", 00:17:59.777 "uuid": "bbc473ba-556e-4bf0-af49-292c0d60222f", 00:17:59.777 "is_configured": true, 00:17:59.777 "data_offset": 2048, 00:17:59.777 "data_size": 63488 00:17:59.777 }, 00:17:59.777 { 00:17:59.777 "name": "BaseBdev3", 00:17:59.777 "uuid": "3e354c70-eb4f-400f-a048-fe888320144c", 00:17:59.777 "is_configured": true, 00:17:59.777 "data_offset": 2048, 00:17:59.777 "data_size": 63488 00:17:59.777 }, 00:17:59.777 { 00:17:59.777 "name": "BaseBdev4", 00:17:59.777 "uuid": "bd644412-8a6d-4857-97e8-96194d3bec20", 00:17:59.777 "is_configured": true, 00:17:59.777 "data_offset": 2048, 00:17:59.777 "data_size": 63488 00:17:59.777 } 00:17:59.777 ] 00:17:59.777 } 00:17:59.777 } 00:17:59.777 }' 00:17:59.777 04:08:53 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:18:00.038 04:08:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:18:00.038 BaseBdev2 00:18:00.038 BaseBdev3 00:18:00.038 BaseBdev4' 00:18:00.038 04:08:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:00.038 04:08:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:18:00.038 04:08:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:00.038 04:08:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:18:00.038 04:08:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:00.038 04:08:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:00.038 04:08:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:00.038 04:08:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:00.038 04:08:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:18:00.038 04:08:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:18:00.038 04:08:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:00.038 04:08:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:18:00.038 04:08:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:00.038 04:08:53 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:00.038 04:08:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:00.038 04:08:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:00.038 04:08:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:18:00.038 04:08:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:18:00.038 04:08:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:00.038 04:08:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:00.038 04:08:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:18:00.038 04:08:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:00.038 04:08:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:00.038 04:08:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:00.038 04:08:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:18:00.038 04:08:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:18:00.038 04:08:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:00.038 04:08:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:00.038 04:08:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:18:00.038 04:08:53 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:00.038 04:08:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:00.038 04:08:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:00.038 04:08:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:18:00.038 04:08:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:18:00.297 04:08:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:18:00.297 04:08:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:00.297 04:08:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:00.297 [2024-12-06 04:08:53.399792] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:18:00.297 04:08:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:00.297 04:08:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:18:00.297 04:08:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid5f 00:18:00.297 04:08:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:18:00.297 04:08:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@199 -- # return 0 00:18:00.297 04:08:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:18:00.297 04:08:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:18:00.297 04:08:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:00.297 04:08:53 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:00.297 04:08:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:00.297 04:08:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:00.297 04:08:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:18:00.297 04:08:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:00.297 04:08:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:00.297 04:08:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:00.297 04:08:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:00.297 04:08:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:00.297 04:08:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:00.297 04:08:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:00.297 04:08:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:00.297 04:08:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:00.297 04:08:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:00.297 "name": "Existed_Raid", 00:18:00.297 "uuid": "22915022-aafc-477e-a3ef-74cdede5652b", 00:18:00.297 "strip_size_kb": 64, 00:18:00.297 "state": "online", 00:18:00.297 "raid_level": "raid5f", 00:18:00.297 "superblock": true, 00:18:00.297 "num_base_bdevs": 4, 00:18:00.297 "num_base_bdevs_discovered": 3, 00:18:00.297 "num_base_bdevs_operational": 3, 00:18:00.297 "base_bdevs_list": [ 00:18:00.297 { 00:18:00.297 "name": 
null, 00:18:00.297 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:00.297 "is_configured": false, 00:18:00.297 "data_offset": 0, 00:18:00.297 "data_size": 63488 00:18:00.297 }, 00:18:00.297 { 00:18:00.297 "name": "BaseBdev2", 00:18:00.297 "uuid": "bbc473ba-556e-4bf0-af49-292c0d60222f", 00:18:00.297 "is_configured": true, 00:18:00.297 "data_offset": 2048, 00:18:00.297 "data_size": 63488 00:18:00.297 }, 00:18:00.297 { 00:18:00.297 "name": "BaseBdev3", 00:18:00.297 "uuid": "3e354c70-eb4f-400f-a048-fe888320144c", 00:18:00.298 "is_configured": true, 00:18:00.298 "data_offset": 2048, 00:18:00.298 "data_size": 63488 00:18:00.298 }, 00:18:00.298 { 00:18:00.298 "name": "BaseBdev4", 00:18:00.298 "uuid": "bd644412-8a6d-4857-97e8-96194d3bec20", 00:18:00.298 "is_configured": true, 00:18:00.298 "data_offset": 2048, 00:18:00.298 "data_size": 63488 00:18:00.298 } 00:18:00.298 ] 00:18:00.298 }' 00:18:00.298 04:08:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:00.298 04:08:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:00.556 04:08:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:18:00.556 04:08:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:18:00.556 04:08:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:00.556 04:08:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:18:00.556 04:08:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:00.556 04:08:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:00.814 04:08:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:00.814 04:08:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # 
raid_bdev=Existed_Raid 00:18:00.814 04:08:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:18:00.814 04:08:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:18:00.814 04:08:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:00.814 04:08:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:00.814 [2024-12-06 04:08:53.952064] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:18:00.814 [2024-12-06 04:08:53.952229] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:00.814 [2024-12-06 04:08:54.049060] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:00.814 04:08:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:00.814 04:08:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:18:00.814 04:08:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:18:00.814 04:08:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:00.814 04:08:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:00.814 04:08:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:00.814 04:08:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:18:00.814 04:08:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:00.814 04:08:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:18:00.814 04:08:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' 
Existed_Raid ']' 00:18:00.814 04:08:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:18:00.814 04:08:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:00.814 04:08:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:00.814 [2024-12-06 04:08:54.092986] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:18:01.072 04:08:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:01.072 04:08:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:18:01.072 04:08:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:18:01.072 04:08:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:18:01.072 04:08:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:01.072 04:08:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:01.072 04:08:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:01.072 04:08:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:01.072 04:08:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:18:01.072 04:08:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:18:01.072 04:08:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:18:01.072 04:08:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:01.072 04:08:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:01.072 [2024-12-06 
04:08:54.232512] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:18:01.072 [2024-12-06 04:08:54.232579] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:18:01.072 04:08:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:01.072 04:08:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:18:01.072 04:08:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:18:01.072 04:08:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:01.072 04:08:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:01.072 04:08:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:18:01.072 04:08:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:01.072 04:08:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:01.072 04:08:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:18:01.072 04:08:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:18:01.072 04:08:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:18:01.072 04:08:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:18:01.072 04:08:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:18:01.072 04:08:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:18:01.072 04:08:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:01.072 04:08:54 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:01.329 BaseBdev2 00:18:01.329 04:08:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:01.329 04:08:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:18:01.329 04:08:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:18:01.329 04:08:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:18:01.329 04:08:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:18:01.329 04:08:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:18:01.329 04:08:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:18:01.329 04:08:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:18:01.329 04:08:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:01.329 04:08:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:01.329 04:08:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:01.329 04:08:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:18:01.329 04:08:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:01.329 04:08:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:01.329 [ 00:18:01.329 { 00:18:01.329 "name": "BaseBdev2", 00:18:01.329 "aliases": [ 00:18:01.329 "09b1575b-151d-4e42-9bd1-157133a18f6b" 00:18:01.329 ], 00:18:01.329 "product_name": "Malloc disk", 00:18:01.329 "block_size": 512, 00:18:01.329 
"num_blocks": 65536, 00:18:01.329 "uuid": "09b1575b-151d-4e42-9bd1-157133a18f6b", 00:18:01.329 "assigned_rate_limits": { 00:18:01.329 "rw_ios_per_sec": 0, 00:18:01.329 "rw_mbytes_per_sec": 0, 00:18:01.330 "r_mbytes_per_sec": 0, 00:18:01.330 "w_mbytes_per_sec": 0 00:18:01.330 }, 00:18:01.330 "claimed": false, 00:18:01.330 "zoned": false, 00:18:01.330 "supported_io_types": { 00:18:01.330 "read": true, 00:18:01.330 "write": true, 00:18:01.330 "unmap": true, 00:18:01.330 "flush": true, 00:18:01.330 "reset": true, 00:18:01.330 "nvme_admin": false, 00:18:01.330 "nvme_io": false, 00:18:01.330 "nvme_io_md": false, 00:18:01.330 "write_zeroes": true, 00:18:01.330 "zcopy": true, 00:18:01.330 "get_zone_info": false, 00:18:01.330 "zone_management": false, 00:18:01.330 "zone_append": false, 00:18:01.330 "compare": false, 00:18:01.330 "compare_and_write": false, 00:18:01.330 "abort": true, 00:18:01.330 "seek_hole": false, 00:18:01.330 "seek_data": false, 00:18:01.330 "copy": true, 00:18:01.330 "nvme_iov_md": false 00:18:01.330 }, 00:18:01.330 "memory_domains": [ 00:18:01.330 { 00:18:01.330 "dma_device_id": "system", 00:18:01.330 "dma_device_type": 1 00:18:01.330 }, 00:18:01.330 { 00:18:01.330 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:01.330 "dma_device_type": 2 00:18:01.330 } 00:18:01.330 ], 00:18:01.330 "driver_specific": {} 00:18:01.330 } 00:18:01.330 ] 00:18:01.330 04:08:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:01.330 04:08:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:18:01.330 04:08:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:18:01.330 04:08:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:18:01.330 04:08:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:18:01.330 04:08:54 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:01.330 04:08:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:01.330 BaseBdev3 00:18:01.330 04:08:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:01.330 04:08:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:18:01.330 04:08:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:18:01.330 04:08:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:18:01.330 04:08:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:18:01.330 04:08:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:18:01.330 04:08:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:18:01.330 04:08:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:18:01.330 04:08:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:01.330 04:08:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:01.330 04:08:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:01.330 04:08:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:18:01.330 04:08:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:01.330 04:08:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:01.330 [ 00:18:01.330 { 00:18:01.330 "name": "BaseBdev3", 00:18:01.330 "aliases": [ 00:18:01.330 
"3fcbab8e-9a84-4fe9-aff7-934ac370f4e4" 00:18:01.330 ], 00:18:01.330 "product_name": "Malloc disk", 00:18:01.330 "block_size": 512, 00:18:01.330 "num_blocks": 65536, 00:18:01.330 "uuid": "3fcbab8e-9a84-4fe9-aff7-934ac370f4e4", 00:18:01.330 "assigned_rate_limits": { 00:18:01.330 "rw_ios_per_sec": 0, 00:18:01.330 "rw_mbytes_per_sec": 0, 00:18:01.330 "r_mbytes_per_sec": 0, 00:18:01.330 "w_mbytes_per_sec": 0 00:18:01.330 }, 00:18:01.330 "claimed": false, 00:18:01.330 "zoned": false, 00:18:01.330 "supported_io_types": { 00:18:01.330 "read": true, 00:18:01.330 "write": true, 00:18:01.330 "unmap": true, 00:18:01.330 "flush": true, 00:18:01.330 "reset": true, 00:18:01.330 "nvme_admin": false, 00:18:01.330 "nvme_io": false, 00:18:01.330 "nvme_io_md": false, 00:18:01.330 "write_zeroes": true, 00:18:01.330 "zcopy": true, 00:18:01.330 "get_zone_info": false, 00:18:01.330 "zone_management": false, 00:18:01.330 "zone_append": false, 00:18:01.330 "compare": false, 00:18:01.330 "compare_and_write": false, 00:18:01.330 "abort": true, 00:18:01.330 "seek_hole": false, 00:18:01.330 "seek_data": false, 00:18:01.330 "copy": true, 00:18:01.330 "nvme_iov_md": false 00:18:01.330 }, 00:18:01.330 "memory_domains": [ 00:18:01.330 { 00:18:01.330 "dma_device_id": "system", 00:18:01.330 "dma_device_type": 1 00:18:01.330 }, 00:18:01.330 { 00:18:01.330 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:01.330 "dma_device_type": 2 00:18:01.330 } 00:18:01.330 ], 00:18:01.330 "driver_specific": {} 00:18:01.330 } 00:18:01.330 ] 00:18:01.330 04:08:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:01.330 04:08:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:18:01.330 04:08:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:18:01.330 04:08:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:18:01.330 04:08:54 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:18:01.330 04:08:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:01.330 04:08:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:01.330 BaseBdev4 00:18:01.330 04:08:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:01.330 04:08:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:18:01.330 04:08:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:18:01.330 04:08:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:18:01.330 04:08:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:18:01.330 04:08:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:18:01.330 04:08:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:18:01.330 04:08:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:18:01.330 04:08:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:01.330 04:08:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:01.330 04:08:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:01.330 04:08:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:18:01.330 04:08:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:01.330 04:08:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set 
+x 00:18:01.330 [ 00:18:01.330 { 00:18:01.330 "name": "BaseBdev4", 00:18:01.330 "aliases": [ 00:18:01.330 "825e695d-fed8-43ad-a48b-12fcbb9d0b7c" 00:18:01.330 ], 00:18:01.330 "product_name": "Malloc disk", 00:18:01.330 "block_size": 512, 00:18:01.330 "num_blocks": 65536, 00:18:01.330 "uuid": "825e695d-fed8-43ad-a48b-12fcbb9d0b7c", 00:18:01.330 "assigned_rate_limits": { 00:18:01.330 "rw_ios_per_sec": 0, 00:18:01.330 "rw_mbytes_per_sec": 0, 00:18:01.330 "r_mbytes_per_sec": 0, 00:18:01.330 "w_mbytes_per_sec": 0 00:18:01.330 }, 00:18:01.330 "claimed": false, 00:18:01.330 "zoned": false, 00:18:01.330 "supported_io_types": { 00:18:01.330 "read": true, 00:18:01.330 "write": true, 00:18:01.330 "unmap": true, 00:18:01.330 "flush": true, 00:18:01.330 "reset": true, 00:18:01.330 "nvme_admin": false, 00:18:01.330 "nvme_io": false, 00:18:01.330 "nvme_io_md": false, 00:18:01.330 "write_zeroes": true, 00:18:01.330 "zcopy": true, 00:18:01.330 "get_zone_info": false, 00:18:01.330 "zone_management": false, 00:18:01.330 "zone_append": false, 00:18:01.330 "compare": false, 00:18:01.330 "compare_and_write": false, 00:18:01.330 "abort": true, 00:18:01.330 "seek_hole": false, 00:18:01.330 "seek_data": false, 00:18:01.330 "copy": true, 00:18:01.330 "nvme_iov_md": false 00:18:01.330 }, 00:18:01.330 "memory_domains": [ 00:18:01.330 { 00:18:01.330 "dma_device_id": "system", 00:18:01.330 "dma_device_type": 1 00:18:01.330 }, 00:18:01.330 { 00:18:01.330 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:01.330 "dma_device_type": 2 00:18:01.330 } 00:18:01.330 ], 00:18:01.330 "driver_specific": {} 00:18:01.330 } 00:18:01.330 ] 00:18:01.330 04:08:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:01.330 04:08:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:18:01.330 04:08:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:18:01.330 04:08:54 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:18:01.330 04:08:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:18:01.330 04:08:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:01.330 04:08:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:01.330 [2024-12-06 04:08:54.629847] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:18:01.330 [2024-12-06 04:08:54.629895] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:18:01.330 [2024-12-06 04:08:54.629936] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:18:01.330 [2024-12-06 04:08:54.631940] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:18:01.330 [2024-12-06 04:08:54.632002] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:18:01.330 04:08:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:01.330 04:08:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:18:01.330 04:08:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:01.330 04:08:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:01.330 04:08:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:01.330 04:08:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:01.330 04:08:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=4 00:18:01.330 04:08:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:01.330 04:08:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:01.330 04:08:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:01.330 04:08:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:01.330 04:08:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:01.330 04:08:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:01.330 04:08:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:01.330 04:08:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:01.330 04:08:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:01.587 04:08:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:01.587 "name": "Existed_Raid", 00:18:01.587 "uuid": "fdaba11b-09d2-4140-b964-bf6d6192a147", 00:18:01.587 "strip_size_kb": 64, 00:18:01.587 "state": "configuring", 00:18:01.587 "raid_level": "raid5f", 00:18:01.587 "superblock": true, 00:18:01.587 "num_base_bdevs": 4, 00:18:01.587 "num_base_bdevs_discovered": 3, 00:18:01.587 "num_base_bdevs_operational": 4, 00:18:01.587 "base_bdevs_list": [ 00:18:01.587 { 00:18:01.587 "name": "BaseBdev1", 00:18:01.587 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:01.587 "is_configured": false, 00:18:01.587 "data_offset": 0, 00:18:01.587 "data_size": 0 00:18:01.587 }, 00:18:01.587 { 00:18:01.587 "name": "BaseBdev2", 00:18:01.587 "uuid": "09b1575b-151d-4e42-9bd1-157133a18f6b", 00:18:01.587 "is_configured": true, 00:18:01.587 "data_offset": 2048, 00:18:01.587 
"data_size": 63488 00:18:01.587 }, 00:18:01.587 { 00:18:01.587 "name": "BaseBdev3", 00:18:01.587 "uuid": "3fcbab8e-9a84-4fe9-aff7-934ac370f4e4", 00:18:01.587 "is_configured": true, 00:18:01.587 "data_offset": 2048, 00:18:01.587 "data_size": 63488 00:18:01.587 }, 00:18:01.587 { 00:18:01.587 "name": "BaseBdev4", 00:18:01.587 "uuid": "825e695d-fed8-43ad-a48b-12fcbb9d0b7c", 00:18:01.587 "is_configured": true, 00:18:01.587 "data_offset": 2048, 00:18:01.587 "data_size": 63488 00:18:01.587 } 00:18:01.587 ] 00:18:01.587 }' 00:18:01.587 04:08:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:01.587 04:08:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:01.846 04:08:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:18:01.846 04:08:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:01.846 04:08:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:01.846 [2024-12-06 04:08:55.057120] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:18:01.846 04:08:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:01.846 04:08:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:18:01.846 04:08:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:01.846 04:08:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:01.846 04:08:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:01.846 04:08:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:01.846 04:08:55 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:18:01.846 04:08:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:01.846 04:08:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:01.846 04:08:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:01.846 04:08:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:01.846 04:08:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:01.846 04:08:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:01.846 04:08:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:01.846 04:08:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:01.846 04:08:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:01.846 04:08:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:01.846 "name": "Existed_Raid", 00:18:01.846 "uuid": "fdaba11b-09d2-4140-b964-bf6d6192a147", 00:18:01.846 "strip_size_kb": 64, 00:18:01.846 "state": "configuring", 00:18:01.846 "raid_level": "raid5f", 00:18:01.846 "superblock": true, 00:18:01.846 "num_base_bdevs": 4, 00:18:01.846 "num_base_bdevs_discovered": 2, 00:18:01.846 "num_base_bdevs_operational": 4, 00:18:01.846 "base_bdevs_list": [ 00:18:01.846 { 00:18:01.846 "name": "BaseBdev1", 00:18:01.846 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:01.846 "is_configured": false, 00:18:01.846 "data_offset": 0, 00:18:01.846 "data_size": 0 00:18:01.846 }, 00:18:01.846 { 00:18:01.846 "name": null, 00:18:01.846 "uuid": "09b1575b-151d-4e42-9bd1-157133a18f6b", 00:18:01.846 
"is_configured": false, 00:18:01.846 "data_offset": 0, 00:18:01.846 "data_size": 63488 00:18:01.846 }, 00:18:01.846 { 00:18:01.846 "name": "BaseBdev3", 00:18:01.846 "uuid": "3fcbab8e-9a84-4fe9-aff7-934ac370f4e4", 00:18:01.846 "is_configured": true, 00:18:01.846 "data_offset": 2048, 00:18:01.846 "data_size": 63488 00:18:01.846 }, 00:18:01.846 { 00:18:01.846 "name": "BaseBdev4", 00:18:01.846 "uuid": "825e695d-fed8-43ad-a48b-12fcbb9d0b7c", 00:18:01.846 "is_configured": true, 00:18:01.846 "data_offset": 2048, 00:18:01.846 "data_size": 63488 00:18:01.846 } 00:18:01.846 ] 00:18:01.846 }' 00:18:01.846 04:08:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:01.846 04:08:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:02.412 04:08:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:18:02.412 04:08:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:02.412 04:08:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:02.412 04:08:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:02.412 04:08:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:02.412 04:08:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:18:02.412 04:08:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:18:02.412 04:08:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:02.412 04:08:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:02.412 [2024-12-06 04:08:55.602100] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 
00:18:02.412 BaseBdev1 00:18:02.412 04:08:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:02.412 04:08:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:18:02.412 04:08:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:18:02.412 04:08:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:18:02.412 04:08:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:18:02.412 04:08:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:18:02.412 04:08:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:18:02.412 04:08:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:18:02.412 04:08:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:02.412 04:08:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:02.412 04:08:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:02.412 04:08:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:18:02.412 04:08:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:02.412 04:08:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:02.412 [ 00:18:02.412 { 00:18:02.412 "name": "BaseBdev1", 00:18:02.412 "aliases": [ 00:18:02.412 "c3e11a2a-bd51-47fc-ba45-259344a73d19" 00:18:02.412 ], 00:18:02.412 "product_name": "Malloc disk", 00:18:02.412 "block_size": 512, 00:18:02.412 "num_blocks": 65536, 00:18:02.412 "uuid": "c3e11a2a-bd51-47fc-ba45-259344a73d19", 
00:18:02.412 "assigned_rate_limits": { 00:18:02.412 "rw_ios_per_sec": 0, 00:18:02.412 "rw_mbytes_per_sec": 0, 00:18:02.412 "r_mbytes_per_sec": 0, 00:18:02.412 "w_mbytes_per_sec": 0 00:18:02.412 }, 00:18:02.412 "claimed": true, 00:18:02.412 "claim_type": "exclusive_write", 00:18:02.412 "zoned": false, 00:18:02.412 "supported_io_types": { 00:18:02.412 "read": true, 00:18:02.412 "write": true, 00:18:02.412 "unmap": true, 00:18:02.412 "flush": true, 00:18:02.412 "reset": true, 00:18:02.412 "nvme_admin": false, 00:18:02.412 "nvme_io": false, 00:18:02.412 "nvme_io_md": false, 00:18:02.412 "write_zeroes": true, 00:18:02.412 "zcopy": true, 00:18:02.412 "get_zone_info": false, 00:18:02.412 "zone_management": false, 00:18:02.412 "zone_append": false, 00:18:02.412 "compare": false, 00:18:02.412 "compare_and_write": false, 00:18:02.412 "abort": true, 00:18:02.412 "seek_hole": false, 00:18:02.412 "seek_data": false, 00:18:02.412 "copy": true, 00:18:02.412 "nvme_iov_md": false 00:18:02.412 }, 00:18:02.412 "memory_domains": [ 00:18:02.412 { 00:18:02.412 "dma_device_id": "system", 00:18:02.412 "dma_device_type": 1 00:18:02.412 }, 00:18:02.412 { 00:18:02.412 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:02.412 "dma_device_type": 2 00:18:02.412 } 00:18:02.412 ], 00:18:02.412 "driver_specific": {} 00:18:02.412 } 00:18:02.412 ] 00:18:02.412 04:08:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:02.412 04:08:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:18:02.412 04:08:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:18:02.412 04:08:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:02.412 04:08:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:02.412 04:08:55 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:02.412 04:08:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:02.412 04:08:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:18:02.412 04:08:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:02.412 04:08:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:02.412 04:08:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:02.412 04:08:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:02.412 04:08:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:02.412 04:08:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:02.412 04:08:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:02.412 04:08:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:02.412 04:08:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:02.412 04:08:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:02.412 "name": "Existed_Raid", 00:18:02.412 "uuid": "fdaba11b-09d2-4140-b964-bf6d6192a147", 00:18:02.412 "strip_size_kb": 64, 00:18:02.412 "state": "configuring", 00:18:02.412 "raid_level": "raid5f", 00:18:02.412 "superblock": true, 00:18:02.412 "num_base_bdevs": 4, 00:18:02.412 "num_base_bdevs_discovered": 3, 00:18:02.412 "num_base_bdevs_operational": 4, 00:18:02.412 "base_bdevs_list": [ 00:18:02.412 { 00:18:02.412 "name": "BaseBdev1", 00:18:02.412 "uuid": "c3e11a2a-bd51-47fc-ba45-259344a73d19", 
00:18:02.412 "is_configured": true, 00:18:02.412 "data_offset": 2048, 00:18:02.412 "data_size": 63488 00:18:02.412 }, 00:18:02.412 { 00:18:02.412 "name": null, 00:18:02.412 "uuid": "09b1575b-151d-4e42-9bd1-157133a18f6b", 00:18:02.412 "is_configured": false, 00:18:02.412 "data_offset": 0, 00:18:02.412 "data_size": 63488 00:18:02.412 }, 00:18:02.412 { 00:18:02.412 "name": "BaseBdev3", 00:18:02.412 "uuid": "3fcbab8e-9a84-4fe9-aff7-934ac370f4e4", 00:18:02.412 "is_configured": true, 00:18:02.412 "data_offset": 2048, 00:18:02.412 "data_size": 63488 00:18:02.413 }, 00:18:02.413 { 00:18:02.413 "name": "BaseBdev4", 00:18:02.413 "uuid": "825e695d-fed8-43ad-a48b-12fcbb9d0b7c", 00:18:02.413 "is_configured": true, 00:18:02.413 "data_offset": 2048, 00:18:02.413 "data_size": 63488 00:18:02.413 } 00:18:02.413 ] 00:18:02.413 }' 00:18:02.413 04:08:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:02.413 04:08:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:02.980 04:08:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:18:02.980 04:08:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:02.980 04:08:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:02.980 04:08:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:02.980 04:08:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:02.980 04:08:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:18:02.980 04:08:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:18:02.980 04:08:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 
00:18:02.980 04:08:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:02.980 [2024-12-06 04:08:56.081343] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:18:02.980 04:08:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:02.980 04:08:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:18:02.980 04:08:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:02.980 04:08:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:02.980 04:08:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:02.980 04:08:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:02.980 04:08:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:18:02.980 04:08:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:02.980 04:08:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:02.980 04:08:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:02.980 04:08:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:02.980 04:08:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:02.980 04:08:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:02.980 04:08:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:02.980 04:08:56 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:18:02.980 04:08:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:02.980 04:08:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:02.980 "name": "Existed_Raid", 00:18:02.980 "uuid": "fdaba11b-09d2-4140-b964-bf6d6192a147", 00:18:02.980 "strip_size_kb": 64, 00:18:02.980 "state": "configuring", 00:18:02.980 "raid_level": "raid5f", 00:18:02.980 "superblock": true, 00:18:02.980 "num_base_bdevs": 4, 00:18:02.980 "num_base_bdevs_discovered": 2, 00:18:02.980 "num_base_bdevs_operational": 4, 00:18:02.980 "base_bdevs_list": [ 00:18:02.980 { 00:18:02.980 "name": "BaseBdev1", 00:18:02.980 "uuid": "c3e11a2a-bd51-47fc-ba45-259344a73d19", 00:18:02.980 "is_configured": true, 00:18:02.980 "data_offset": 2048, 00:18:02.980 "data_size": 63488 00:18:02.980 }, 00:18:02.980 { 00:18:02.980 "name": null, 00:18:02.980 "uuid": "09b1575b-151d-4e42-9bd1-157133a18f6b", 00:18:02.980 "is_configured": false, 00:18:02.980 "data_offset": 0, 00:18:02.980 "data_size": 63488 00:18:02.980 }, 00:18:02.980 { 00:18:02.980 "name": null, 00:18:02.980 "uuid": "3fcbab8e-9a84-4fe9-aff7-934ac370f4e4", 00:18:02.980 "is_configured": false, 00:18:02.980 "data_offset": 0, 00:18:02.980 "data_size": 63488 00:18:02.980 }, 00:18:02.980 { 00:18:02.980 "name": "BaseBdev4", 00:18:02.980 "uuid": "825e695d-fed8-43ad-a48b-12fcbb9d0b7c", 00:18:02.980 "is_configured": true, 00:18:02.980 "data_offset": 2048, 00:18:02.980 "data_size": 63488 00:18:02.980 } 00:18:02.980 ] 00:18:02.980 }' 00:18:02.980 04:08:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:02.980 04:08:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:03.238 04:08:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:18:03.238 04:08:56 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:03.238 04:08:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:03.238 04:08:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:03.238 04:08:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:03.238 04:08:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:18:03.238 04:08:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:18:03.238 04:08:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:03.238 04:08:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:03.238 [2024-12-06 04:08:56.580548] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:18:03.238 04:08:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:03.238 04:08:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:18:03.238 04:08:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:03.238 04:08:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:03.238 04:08:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:03.238 04:08:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:03.238 04:08:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:18:03.238 04:08:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 
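The entries above repeatedly run the `verify_raid_bdev_state` helper: fetch the raid state with `rpc_cmd bdev_raid_get_bdevs all`, select the `Existed_Raid` object with `jq`, and compare fields with a bash `[[ ]]` test (e.g. `[[ true == \t\r\u\e ]]`). The following is a minimal stand-in for that check, not SPDK code: the JSON literal is a hypothetical sample of what the RPC returns in this log, and the `grep`/`cut` extraction replaces the real script's `jq` so the sketch is self-contained.

```shell
# Simplified stand-in for verify_raid_bdev_state (bdev_raid.sh@103-115).
# The sample JSON below is hypothetical; in the real test it comes from
# "rpc_cmd bdev_raid_get_bdevs all" piped through jq.
raid_bdev_info='{ "name": "Existed_Raid", "state": "configuring",
  "raid_level": "raid5f", "strip_size_kb": 64,
  "num_base_bdevs_discovered": 2, "num_base_bdevs_operational": 4 }'

expected_state=configuring

# Extract the "state" field (the real script uses jq for this step).
state=$(printf '%s\n' "$raid_bdev_info" | grep -o '"state": *"[^"]*"' | cut -d'"' -f4)

# Same comparison style the xtrace shows, e.g. [[ true == \t\r\u\e ]].
if [[ $state == "$expected_state" ]]; then
  echo "state ok: $state"
else
  echo "state mismatch: got $state, want $expected_state" >&2
  exit 1
fi
```

In the real helper the same pattern is repeated for `raid_level`, `strip_size_kb`, and the base-bdev counts before the test proceeds.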
00:18:03.238 04:08:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:03.238 04:08:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:03.238 04:08:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:03.497 04:08:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:03.497 04:08:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:03.497 04:08:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:03.497 04:08:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:03.498 04:08:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:03.498 04:08:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:03.498 "name": "Existed_Raid", 00:18:03.498 "uuid": "fdaba11b-09d2-4140-b964-bf6d6192a147", 00:18:03.498 "strip_size_kb": 64, 00:18:03.498 "state": "configuring", 00:18:03.498 "raid_level": "raid5f", 00:18:03.498 "superblock": true, 00:18:03.498 "num_base_bdevs": 4, 00:18:03.498 "num_base_bdevs_discovered": 3, 00:18:03.498 "num_base_bdevs_operational": 4, 00:18:03.498 "base_bdevs_list": [ 00:18:03.498 { 00:18:03.498 "name": "BaseBdev1", 00:18:03.498 "uuid": "c3e11a2a-bd51-47fc-ba45-259344a73d19", 00:18:03.498 "is_configured": true, 00:18:03.498 "data_offset": 2048, 00:18:03.498 "data_size": 63488 00:18:03.498 }, 00:18:03.498 { 00:18:03.498 "name": null, 00:18:03.498 "uuid": "09b1575b-151d-4e42-9bd1-157133a18f6b", 00:18:03.498 "is_configured": false, 00:18:03.498 "data_offset": 0, 00:18:03.498 "data_size": 63488 00:18:03.498 }, 00:18:03.498 { 00:18:03.498 "name": "BaseBdev3", 00:18:03.498 "uuid": "3fcbab8e-9a84-4fe9-aff7-934ac370f4e4", 
00:18:03.498 "is_configured": true, 00:18:03.498 "data_offset": 2048, 00:18:03.498 "data_size": 63488 00:18:03.498 }, 00:18:03.498 { 00:18:03.498 "name": "BaseBdev4", 00:18:03.498 "uuid": "825e695d-fed8-43ad-a48b-12fcbb9d0b7c", 00:18:03.498 "is_configured": true, 00:18:03.498 "data_offset": 2048, 00:18:03.498 "data_size": 63488 00:18:03.498 } 00:18:03.498 ] 00:18:03.498 }' 00:18:03.498 04:08:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:03.498 04:08:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:03.757 04:08:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:03.757 04:08:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:03.757 04:08:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:03.757 04:08:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:18:03.757 04:08:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:03.757 04:08:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:18:03.757 04:08:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:18:03.757 04:08:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:03.757 04:08:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:03.757 [2024-12-06 04:08:57.063802] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:18:04.015 04:08:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:04.015 04:08:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid 
configuring raid5f 64 4 00:18:04.015 04:08:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:04.015 04:08:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:04.015 04:08:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:04.015 04:08:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:04.015 04:08:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:18:04.015 04:08:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:04.015 04:08:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:04.015 04:08:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:04.015 04:08:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:04.015 04:08:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:04.015 04:08:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:04.015 04:08:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:04.015 04:08:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:04.015 04:08:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:04.015 04:08:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:04.015 "name": "Existed_Raid", 00:18:04.015 "uuid": "fdaba11b-09d2-4140-b964-bf6d6192a147", 00:18:04.015 "strip_size_kb": 64, 00:18:04.016 "state": "configuring", 00:18:04.016 "raid_level": "raid5f", 
00:18:04.016 "superblock": true, 00:18:04.016 "num_base_bdevs": 4, 00:18:04.016 "num_base_bdevs_discovered": 2, 00:18:04.016 "num_base_bdevs_operational": 4, 00:18:04.016 "base_bdevs_list": [ 00:18:04.016 { 00:18:04.016 "name": null, 00:18:04.016 "uuid": "c3e11a2a-bd51-47fc-ba45-259344a73d19", 00:18:04.016 "is_configured": false, 00:18:04.016 "data_offset": 0, 00:18:04.016 "data_size": 63488 00:18:04.016 }, 00:18:04.016 { 00:18:04.016 "name": null, 00:18:04.016 "uuid": "09b1575b-151d-4e42-9bd1-157133a18f6b", 00:18:04.016 "is_configured": false, 00:18:04.016 "data_offset": 0, 00:18:04.016 "data_size": 63488 00:18:04.016 }, 00:18:04.016 { 00:18:04.016 "name": "BaseBdev3", 00:18:04.016 "uuid": "3fcbab8e-9a84-4fe9-aff7-934ac370f4e4", 00:18:04.016 "is_configured": true, 00:18:04.016 "data_offset": 2048, 00:18:04.016 "data_size": 63488 00:18:04.016 }, 00:18:04.016 { 00:18:04.016 "name": "BaseBdev4", 00:18:04.016 "uuid": "825e695d-fed8-43ad-a48b-12fcbb9d0b7c", 00:18:04.016 "is_configured": true, 00:18:04.016 "data_offset": 2048, 00:18:04.016 "data_size": 63488 00:18:04.016 } 00:18:04.016 ] 00:18:04.016 }' 00:18:04.016 04:08:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:04.016 04:08:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:04.275 04:08:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:04.275 04:08:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:04.275 04:08:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:18:04.275 04:08:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:04.535 04:08:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:04.535 04:08:57 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:18:04.535 04:08:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:18:04.535 04:08:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:04.535 04:08:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:04.535 [2024-12-06 04:08:57.671696] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:18:04.535 04:08:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:04.535 04:08:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:18:04.535 04:08:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:04.535 04:08:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:04.535 04:08:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:04.535 04:08:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:04.535 04:08:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:18:04.535 04:08:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:04.535 04:08:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:04.535 04:08:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:04.535 04:08:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:04.535 04:08:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 
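The sequence in this log — remove a base bdev, watch `num_base_bdevs_discovered` drop, re-add it, watch the count rise — follows simple bookkeeping: each successful `bdev_raid_add_base_bdev` claims one slot, and the raid leaves `configuring` for `online` only once every slot is filled. The toy model below illustrates that state transition; it is not SPDK's implementation, and the function name `add_base_bdev` and starting counts are illustrative values taken from the surrounding log.

```shell
# Toy model (not SPDK code) of the discovered-count bookkeeping this log
# exercises: each add bumps num_base_bdevs_discovered, and the raid goes
# "online" only when discovered == num_base_bdevs.
num_base_bdevs=4
num_base_bdevs_discovered=2   # two slots unconfigured at this point in the log
state=configuring

add_base_bdev() {
  # $1 is the base bdev name; only the count matters for this sketch.
  num_base_bdevs_discovered=$((num_base_bdevs_discovered + 1))
  if [ "$num_base_bdevs_discovered" -eq "$num_base_bdevs" ]; then
    state=online
  fi
}

add_base_bdev BaseBdev2     # discovered: 3, still configuring
add_base_bdev NewBaseBdev   # discovered: 4, raid goes online
echo "$state $num_base_bdevs_discovered"
```

This matches what the log shows further down: after `NewBaseBdev` is claimed, `verify_raid_bdev_state Existed_Raid online raid5f 64 4` passes with `num_base_bdevs_discovered: 4`.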
00:18:04.535 04:08:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:04.535 04:08:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:04.535 04:08:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:04.535 04:08:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:04.535 04:08:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:04.535 "name": "Existed_Raid", 00:18:04.535 "uuid": "fdaba11b-09d2-4140-b964-bf6d6192a147", 00:18:04.535 "strip_size_kb": 64, 00:18:04.535 "state": "configuring", 00:18:04.535 "raid_level": "raid5f", 00:18:04.535 "superblock": true, 00:18:04.535 "num_base_bdevs": 4, 00:18:04.535 "num_base_bdevs_discovered": 3, 00:18:04.535 "num_base_bdevs_operational": 4, 00:18:04.535 "base_bdevs_list": [ 00:18:04.535 { 00:18:04.535 "name": null, 00:18:04.535 "uuid": "c3e11a2a-bd51-47fc-ba45-259344a73d19", 00:18:04.535 "is_configured": false, 00:18:04.535 "data_offset": 0, 00:18:04.535 "data_size": 63488 00:18:04.535 }, 00:18:04.535 { 00:18:04.535 "name": "BaseBdev2", 00:18:04.535 "uuid": "09b1575b-151d-4e42-9bd1-157133a18f6b", 00:18:04.535 "is_configured": true, 00:18:04.535 "data_offset": 2048, 00:18:04.535 "data_size": 63488 00:18:04.535 }, 00:18:04.535 { 00:18:04.535 "name": "BaseBdev3", 00:18:04.535 "uuid": "3fcbab8e-9a84-4fe9-aff7-934ac370f4e4", 00:18:04.535 "is_configured": true, 00:18:04.535 "data_offset": 2048, 00:18:04.535 "data_size": 63488 00:18:04.535 }, 00:18:04.535 { 00:18:04.535 "name": "BaseBdev4", 00:18:04.535 "uuid": "825e695d-fed8-43ad-a48b-12fcbb9d0b7c", 00:18:04.535 "is_configured": true, 00:18:04.535 "data_offset": 2048, 00:18:04.535 "data_size": 63488 00:18:04.535 } 00:18:04.535 ] 00:18:04.535 }' 00:18:04.535 04:08:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 
-- # xtrace_disable 00:18:04.535 04:08:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:04.794 04:08:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:04.794 04:08:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:04.794 04:08:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:04.794 04:08:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:18:04.794 04:08:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:04.794 04:08:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:18:04.794 04:08:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:18:04.794 04:08:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:04.794 04:08:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:04.794 04:08:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:05.053 04:08:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:05.053 04:08:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u c3e11a2a-bd51-47fc-ba45-259344a73d19 00:18:05.053 04:08:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:05.053 04:08:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:05.054 [2024-12-06 04:08:58.208348] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:18:05.054 [2024-12-06 04:08:58.208615] 
bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:18:05.054 [2024-12-06 04:08:58.208643] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:18:05.054 NewBaseBdev 00:18:05.054 [2024-12-06 04:08:58.208926] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:18:05.054 04:08:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:05.054 04:08:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:18:05.054 04:08:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:18:05.054 04:08:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:18:05.054 04:08:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:18:05.054 04:08:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:18:05.054 04:08:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:18:05.054 04:08:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:18:05.054 04:08:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:05.054 04:08:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:05.054 [2024-12-06 04:08:58.217068] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:18:05.054 [2024-12-06 04:08:58.217098] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:18:05.054 [2024-12-06 04:08:58.217360] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:05.054 04:08:58 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:05.054 04:08:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:18:05.054 04:08:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:05.054 04:08:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:05.054 [ 00:18:05.054 { 00:18:05.054 "name": "NewBaseBdev", 00:18:05.054 "aliases": [ 00:18:05.054 "c3e11a2a-bd51-47fc-ba45-259344a73d19" 00:18:05.054 ], 00:18:05.054 "product_name": "Malloc disk", 00:18:05.054 "block_size": 512, 00:18:05.054 "num_blocks": 65536, 00:18:05.054 "uuid": "c3e11a2a-bd51-47fc-ba45-259344a73d19", 00:18:05.054 "assigned_rate_limits": { 00:18:05.054 "rw_ios_per_sec": 0, 00:18:05.054 "rw_mbytes_per_sec": 0, 00:18:05.054 "r_mbytes_per_sec": 0, 00:18:05.054 "w_mbytes_per_sec": 0 00:18:05.054 }, 00:18:05.054 "claimed": true, 00:18:05.054 "claim_type": "exclusive_write", 00:18:05.054 "zoned": false, 00:18:05.054 "supported_io_types": { 00:18:05.054 "read": true, 00:18:05.054 "write": true, 00:18:05.054 "unmap": true, 00:18:05.054 "flush": true, 00:18:05.054 "reset": true, 00:18:05.054 "nvme_admin": false, 00:18:05.054 "nvme_io": false, 00:18:05.054 "nvme_io_md": false, 00:18:05.054 "write_zeroes": true, 00:18:05.054 "zcopy": true, 00:18:05.054 "get_zone_info": false, 00:18:05.054 "zone_management": false, 00:18:05.054 "zone_append": false, 00:18:05.054 "compare": false, 00:18:05.054 "compare_and_write": false, 00:18:05.054 "abort": true, 00:18:05.054 "seek_hole": false, 00:18:05.054 "seek_data": false, 00:18:05.054 "copy": true, 00:18:05.054 "nvme_iov_md": false 00:18:05.054 }, 00:18:05.054 "memory_domains": [ 00:18:05.054 { 00:18:05.054 "dma_device_id": "system", 00:18:05.054 "dma_device_type": 1 00:18:05.054 }, 00:18:05.054 { 00:18:05.054 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:05.054 "dma_device_type": 2 00:18:05.054 } 
00:18:05.054 ], 00:18:05.054 "driver_specific": {} 00:18:05.054 } 00:18:05.054 ] 00:18:05.054 04:08:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:05.054 04:08:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:18:05.054 04:08:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 4 00:18:05.054 04:08:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:05.054 04:08:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:05.054 04:08:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:05.054 04:08:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:05.054 04:08:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:18:05.054 04:08:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:05.054 04:08:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:05.054 04:08:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:05.054 04:08:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:05.054 04:08:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:05.054 04:08:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:05.054 04:08:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:05.054 04:08:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:05.054 
04:08:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:05.054 04:08:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:05.054 "name": "Existed_Raid", 00:18:05.054 "uuid": "fdaba11b-09d2-4140-b964-bf6d6192a147", 00:18:05.054 "strip_size_kb": 64, 00:18:05.054 "state": "online", 00:18:05.054 "raid_level": "raid5f", 00:18:05.054 "superblock": true, 00:18:05.054 "num_base_bdevs": 4, 00:18:05.054 "num_base_bdevs_discovered": 4, 00:18:05.054 "num_base_bdevs_operational": 4, 00:18:05.054 "base_bdevs_list": [ 00:18:05.054 { 00:18:05.054 "name": "NewBaseBdev", 00:18:05.054 "uuid": "c3e11a2a-bd51-47fc-ba45-259344a73d19", 00:18:05.054 "is_configured": true, 00:18:05.054 "data_offset": 2048, 00:18:05.054 "data_size": 63488 00:18:05.054 }, 00:18:05.054 { 00:18:05.054 "name": "BaseBdev2", 00:18:05.054 "uuid": "09b1575b-151d-4e42-9bd1-157133a18f6b", 00:18:05.054 "is_configured": true, 00:18:05.054 "data_offset": 2048, 00:18:05.054 "data_size": 63488 00:18:05.054 }, 00:18:05.054 { 00:18:05.054 "name": "BaseBdev3", 00:18:05.054 "uuid": "3fcbab8e-9a84-4fe9-aff7-934ac370f4e4", 00:18:05.054 "is_configured": true, 00:18:05.054 "data_offset": 2048, 00:18:05.054 "data_size": 63488 00:18:05.054 }, 00:18:05.054 { 00:18:05.054 "name": "BaseBdev4", 00:18:05.054 "uuid": "825e695d-fed8-43ad-a48b-12fcbb9d0b7c", 00:18:05.054 "is_configured": true, 00:18:05.055 "data_offset": 2048, 00:18:05.055 "data_size": 63488 00:18:05.055 } 00:18:05.055 ] 00:18:05.055 }' 00:18:05.055 04:08:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:05.055 04:08:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:05.624 04:08:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:18:05.624 04:08:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local 
raid_bdev_name=Existed_Raid 00:18:05.624 04:08:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:18:05.624 04:08:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:18:05.624 04:08:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:18:05.624 04:08:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:18:05.624 04:08:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:18:05.624 04:08:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:18:05.624 04:08:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:05.624 04:08:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:05.624 [2024-12-06 04:08:58.733111] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:05.624 04:08:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:05.624 04:08:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:18:05.624 "name": "Existed_Raid", 00:18:05.624 "aliases": [ 00:18:05.624 "fdaba11b-09d2-4140-b964-bf6d6192a147" 00:18:05.624 ], 00:18:05.624 "product_name": "Raid Volume", 00:18:05.624 "block_size": 512, 00:18:05.624 "num_blocks": 190464, 00:18:05.624 "uuid": "fdaba11b-09d2-4140-b964-bf6d6192a147", 00:18:05.624 "assigned_rate_limits": { 00:18:05.624 "rw_ios_per_sec": 0, 00:18:05.624 "rw_mbytes_per_sec": 0, 00:18:05.624 "r_mbytes_per_sec": 0, 00:18:05.624 "w_mbytes_per_sec": 0 00:18:05.624 }, 00:18:05.624 "claimed": false, 00:18:05.624 "zoned": false, 00:18:05.624 "supported_io_types": { 00:18:05.624 "read": true, 00:18:05.624 "write": true, 00:18:05.624 "unmap": false, 00:18:05.624 "flush": false, 
00:18:05.624 "reset": true, 00:18:05.624 "nvme_admin": false, 00:18:05.624 "nvme_io": false, 00:18:05.624 "nvme_io_md": false, 00:18:05.624 "write_zeroes": true, 00:18:05.624 "zcopy": false, 00:18:05.624 "get_zone_info": false, 00:18:05.624 "zone_management": false, 00:18:05.624 "zone_append": false, 00:18:05.624 "compare": false, 00:18:05.624 "compare_and_write": false, 00:18:05.624 "abort": false, 00:18:05.624 "seek_hole": false, 00:18:05.624 "seek_data": false, 00:18:05.624 "copy": false, 00:18:05.624 "nvme_iov_md": false 00:18:05.624 }, 00:18:05.624 "driver_specific": { 00:18:05.624 "raid": { 00:18:05.624 "uuid": "fdaba11b-09d2-4140-b964-bf6d6192a147", 00:18:05.624 "strip_size_kb": 64, 00:18:05.624 "state": "online", 00:18:05.624 "raid_level": "raid5f", 00:18:05.624 "superblock": true, 00:18:05.624 "num_base_bdevs": 4, 00:18:05.624 "num_base_bdevs_discovered": 4, 00:18:05.624 "num_base_bdevs_operational": 4, 00:18:05.625 "base_bdevs_list": [ 00:18:05.625 { 00:18:05.625 "name": "NewBaseBdev", 00:18:05.625 "uuid": "c3e11a2a-bd51-47fc-ba45-259344a73d19", 00:18:05.625 "is_configured": true, 00:18:05.625 "data_offset": 2048, 00:18:05.625 "data_size": 63488 00:18:05.625 }, 00:18:05.625 { 00:18:05.625 "name": "BaseBdev2", 00:18:05.625 "uuid": "09b1575b-151d-4e42-9bd1-157133a18f6b", 00:18:05.625 "is_configured": true, 00:18:05.625 "data_offset": 2048, 00:18:05.625 "data_size": 63488 00:18:05.625 }, 00:18:05.625 { 00:18:05.625 "name": "BaseBdev3", 00:18:05.625 "uuid": "3fcbab8e-9a84-4fe9-aff7-934ac370f4e4", 00:18:05.625 "is_configured": true, 00:18:05.625 "data_offset": 2048, 00:18:05.625 "data_size": 63488 00:18:05.625 }, 00:18:05.625 { 00:18:05.625 "name": "BaseBdev4", 00:18:05.625 "uuid": "825e695d-fed8-43ad-a48b-12fcbb9d0b7c", 00:18:05.625 "is_configured": true, 00:18:05.625 "data_offset": 2048, 00:18:05.625 "data_size": 63488 00:18:05.625 } 00:18:05.625 ] 00:18:05.625 } 00:18:05.625 } 00:18:05.625 }' 00:18:05.625 04:08:58 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:18:05.625 04:08:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:18:05.625 BaseBdev2 00:18:05.625 BaseBdev3 00:18:05.625 BaseBdev4' 00:18:05.625 04:08:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:05.625 04:08:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:18:05.625 04:08:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:05.625 04:08:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:18:05.625 04:08:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:05.625 04:08:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:05.625 04:08:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:05.625 04:08:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:05.625 04:08:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:18:05.625 04:08:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:18:05.625 04:08:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:05.625 04:08:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:18:05.625 04:08:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:05.625 
04:08:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:05.625 04:08:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:05.625 04:08:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:05.625 04:08:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:18:05.625 04:08:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:18:05.625 04:08:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:05.625 04:08:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:18:05.625 04:08:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:05.625 04:08:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:05.625 04:08:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:05.885 04:08:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:05.885 04:08:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:18:05.885 04:08:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:18:05.885 04:08:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:05.885 04:08:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:18:05.885 04:08:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:05.885 04:08:59 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:05.885 04:08:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:05.885 04:08:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:05.885 04:08:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:18:05.885 04:08:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:18:05.885 04:08:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:18:05.885 04:08:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:05.885 04:08:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:05.885 [2024-12-06 04:08:59.056294] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:18:05.885 [2024-12-06 04:08:59.056335] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:05.885 [2024-12-06 04:08:59.056405] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:05.885 [2024-12-06 04:08:59.056716] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:05.885 [2024-12-06 04:08:59.056736] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:18:05.885 04:08:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:05.885 04:08:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 83724 00:18:05.885 04:08:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 83724 ']' 00:18:05.885 04:08:59 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@958 -- # kill -0 83724 00:18:05.885 04:08:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:18:05.885 04:08:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:05.885 04:08:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 83724 00:18:05.885 killing process with pid 83724 00:18:05.885 04:08:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:05.885 04:08:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:05.885 04:08:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 83724' 00:18:05.885 04:08:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 83724 00:18:05.885 [2024-12-06 04:08:59.106809] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:18:05.885 04:08:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 83724 00:18:06.455 [2024-12-06 04:08:59.518924] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:18:07.396 ************************************ 00:18:07.396 END TEST raid5f_state_function_test_sb 00:18:07.396 ************************************ 00:18:07.396 04:09:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:18:07.396 00:18:07.396 real 0m11.460s 00:18:07.396 user 0m18.139s 00:18:07.396 sys 0m2.088s 00:18:07.396 04:09:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:07.396 04:09:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:07.396 04:09:00 bdev_raid -- bdev/bdev_raid.sh@988 -- # run_test raid5f_superblock_test raid_superblock_test raid5f 4 00:18:07.396 04:09:00 bdev_raid -- 
common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:18:07.396 04:09:00 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:07.396 04:09:00 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:18:07.396 ************************************ 00:18:07.396 START TEST raid5f_superblock_test 00:18:07.396 ************************************ 00:18:07.396 04:09:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test raid5f 4 00:18:07.396 04:09:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid5f 00:18:07.396 04:09:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=4 00:18:07.396 04:09:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:18:07.396 04:09:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:18:07.396 04:09:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:18:07.396 04:09:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:18:07.396 04:09:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:18:07.396 04:09:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:18:07.396 04:09:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:18:07.396 04:09:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:18:07.396 04:09:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:18:07.396 04:09:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:18:07.396 04:09:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:18:07.396 04:09:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid5f '!=' raid1 ']' 
00:18:07.396 04:09:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:18:07.396 04:09:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:18:07.396 04:09:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=84389 00:18:07.396 04:09:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:18:07.396 04:09:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 84389 00:18:07.396 04:09:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 84389 ']' 00:18:07.396 04:09:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:07.396 04:09:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:07.396 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:07.396 04:09:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:07.396 04:09:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:07.396 04:09:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:07.656 [2024-12-06 04:09:00.819384] Starting SPDK v25.01-pre git sha1 a4d2a837b / DPDK 24.03.0 initialization... 
00:18:07.656 [2024-12-06 04:09:00.819520] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84389 ] 00:18:07.656 [2024-12-06 04:09:00.994693] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:07.916 [2024-12-06 04:09:01.109715] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:08.176 [2024-12-06 04:09:01.317952] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:08.176 [2024-12-06 04:09:01.318016] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:08.436 04:09:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:08.436 04:09:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:18:08.436 04:09:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:18:08.436 04:09:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:18:08.436 04:09:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:18:08.436 04:09:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:18:08.436 04:09:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:18:08.436 04:09:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:18:08.436 04:09:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:18:08.436 04:09:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:18:08.436 04:09:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b 
malloc1 00:18:08.436 04:09:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:08.436 04:09:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:08.436 malloc1 00:18:08.436 04:09:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:08.436 04:09:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:18:08.436 04:09:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:08.436 04:09:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:08.436 [2024-12-06 04:09:01.774179] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:18:08.436 [2024-12-06 04:09:01.774242] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:08.436 [2024-12-06 04:09:01.774278] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:18:08.436 [2024-12-06 04:09:01.774287] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:08.436 [2024-12-06 04:09:01.776337] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:08.436 [2024-12-06 04:09:01.776374] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:18:08.436 pt1 00:18:08.436 04:09:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:08.436 04:09:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:18:08.436 04:09:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:18:08.436 04:09:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:18:08.436 04:09:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 
00:18:08.436 04:09:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:18:08.436 04:09:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:18:08.436 04:09:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:18:08.436 04:09:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:18:08.436 04:09:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:18:08.436 04:09:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:08.436 04:09:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:08.695 malloc2 00:18:08.695 04:09:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:08.695 04:09:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:18:08.695 04:09:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:08.695 04:09:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:08.695 [2024-12-06 04:09:01.829765] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:18:08.695 [2024-12-06 04:09:01.829828] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:08.695 [2024-12-06 04:09:01.829853] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:18:08.695 [2024-12-06 04:09:01.829862] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:08.695 [2024-12-06 04:09:01.831982] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:08.695 [2024-12-06 04:09:01.832027] vbdev_passthru.c: 
711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:18:08.695 pt2 00:18:08.695 04:09:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:08.695 04:09:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:18:08.695 04:09:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:18:08.695 04:09:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:18:08.695 04:09:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:18:08.695 04:09:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:18:08.695 04:09:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:18:08.695 04:09:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:18:08.695 04:09:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:18:08.695 04:09:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:18:08.695 04:09:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:08.695 04:09:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:08.695 malloc3 00:18:08.695 04:09:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:08.695 04:09:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:18:08.696 04:09:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:08.696 04:09:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:08.696 [2024-12-06 04:09:01.893977] 
vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:18:08.696 [2024-12-06 04:09:01.894113] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:08.696 [2024-12-06 04:09:01.894155] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:18:08.696 [2024-12-06 04:09:01.894191] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:08.696 [2024-12-06 04:09:01.896318] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:08.696 [2024-12-06 04:09:01.896388] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:18:08.696 pt3 00:18:08.696 04:09:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:08.696 04:09:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:18:08.696 04:09:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:18:08.696 04:09:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc4 00:18:08.696 04:09:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt4 00:18:08.696 04:09:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004 00:18:08.696 04:09:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:18:08.696 04:09:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:18:08.696 04:09:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:18:08.696 04:09:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc4 00:18:08.696 04:09:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:08.696 04:09:01 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:08.696 malloc4 00:18:08.696 04:09:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:08.696 04:09:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:18:08.696 04:09:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:08.696 04:09:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:08.696 [2024-12-06 04:09:01.954864] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:18:08.696 [2024-12-06 04:09:01.954984] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:08.696 [2024-12-06 04:09:01.955025] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:18:08.696 [2024-12-06 04:09:01.955072] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:08.696 [2024-12-06 04:09:01.957362] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:08.696 [2024-12-06 04:09:01.957453] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:18:08.696 pt4 00:18:08.696 04:09:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:08.696 04:09:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:18:08.696 04:09:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:18:08.696 04:09:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''pt1 pt2 pt3 pt4'\''' -n raid_bdev1 -s 00:18:08.696 04:09:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:08.696 04:09:01 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:18:08.696 [2024-12-06 04:09:01.970869] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:18:08.696 [2024-12-06 04:09:01.972747] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:18:08.696 [2024-12-06 04:09:01.972879] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:18:08.696 [2024-12-06 04:09:01.972969] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:18:08.696 [2024-12-06 04:09:01.973239] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:18:08.696 [2024-12-06 04:09:01.973295] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:18:08.696 [2024-12-06 04:09:01.973601] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:18:08.696 [2024-12-06 04:09:01.981507] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:18:08.696 [2024-12-06 04:09:01.981585] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:18:08.696 [2024-12-06 04:09:01.981876] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:08.696 04:09:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:08.696 04:09:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:18:08.696 04:09:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:08.696 04:09:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:08.696 04:09:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:08.696 04:09:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:08.696 
04:09:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:18:08.696 04:09:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:08.696 04:09:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:08.696 04:09:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:08.696 04:09:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:08.696 04:09:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:08.696 04:09:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:08.696 04:09:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:08.696 04:09:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:08.696 04:09:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:08.696 04:09:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:08.696 "name": "raid_bdev1", 00:18:08.696 "uuid": "eebaed14-edf9-4d57-a0b9-0b7336397dc8", 00:18:08.696 "strip_size_kb": 64, 00:18:08.696 "state": "online", 00:18:08.696 "raid_level": "raid5f", 00:18:08.696 "superblock": true, 00:18:08.696 "num_base_bdevs": 4, 00:18:08.696 "num_base_bdevs_discovered": 4, 00:18:08.696 "num_base_bdevs_operational": 4, 00:18:08.696 "base_bdevs_list": [ 00:18:08.696 { 00:18:08.696 "name": "pt1", 00:18:08.696 "uuid": "00000000-0000-0000-0000-000000000001", 00:18:08.696 "is_configured": true, 00:18:08.696 "data_offset": 2048, 00:18:08.696 "data_size": 63488 00:18:08.696 }, 00:18:08.696 { 00:18:08.696 "name": "pt2", 00:18:08.696 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:08.696 "is_configured": true, 00:18:08.696 "data_offset": 2048, 00:18:08.696 
"data_size": 63488 00:18:08.696 }, 00:18:08.696 { 00:18:08.696 "name": "pt3", 00:18:08.696 "uuid": "00000000-0000-0000-0000-000000000003", 00:18:08.696 "is_configured": true, 00:18:08.696 "data_offset": 2048, 00:18:08.696 "data_size": 63488 00:18:08.696 }, 00:18:08.696 { 00:18:08.696 "name": "pt4", 00:18:08.696 "uuid": "00000000-0000-0000-0000-000000000004", 00:18:08.696 "is_configured": true, 00:18:08.696 "data_offset": 2048, 00:18:08.696 "data_size": 63488 00:18:08.696 } 00:18:08.696 ] 00:18:08.696 }' 00:18:08.696 04:09:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:08.696 04:09:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:09.264 04:09:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:18:09.264 04:09:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:18:09.264 04:09:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:18:09.264 04:09:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:18:09.264 04:09:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:18:09.264 04:09:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:18:09.264 04:09:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:18:09.264 04:09:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:09.264 04:09:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:09.264 04:09:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:18:09.264 [2024-12-06 04:09:02.410681] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:09.264 04:09:02 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:09.264 04:09:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:18:09.264 "name": "raid_bdev1", 00:18:09.264 "aliases": [ 00:18:09.264 "eebaed14-edf9-4d57-a0b9-0b7336397dc8" 00:18:09.264 ], 00:18:09.264 "product_name": "Raid Volume", 00:18:09.264 "block_size": 512, 00:18:09.264 "num_blocks": 190464, 00:18:09.264 "uuid": "eebaed14-edf9-4d57-a0b9-0b7336397dc8", 00:18:09.264 "assigned_rate_limits": { 00:18:09.264 "rw_ios_per_sec": 0, 00:18:09.264 "rw_mbytes_per_sec": 0, 00:18:09.264 "r_mbytes_per_sec": 0, 00:18:09.264 "w_mbytes_per_sec": 0 00:18:09.264 }, 00:18:09.264 "claimed": false, 00:18:09.264 "zoned": false, 00:18:09.264 "supported_io_types": { 00:18:09.264 "read": true, 00:18:09.264 "write": true, 00:18:09.264 "unmap": false, 00:18:09.264 "flush": false, 00:18:09.264 "reset": true, 00:18:09.264 "nvme_admin": false, 00:18:09.264 "nvme_io": false, 00:18:09.264 "nvme_io_md": false, 00:18:09.264 "write_zeroes": true, 00:18:09.264 "zcopy": false, 00:18:09.264 "get_zone_info": false, 00:18:09.264 "zone_management": false, 00:18:09.264 "zone_append": false, 00:18:09.264 "compare": false, 00:18:09.264 "compare_and_write": false, 00:18:09.264 "abort": false, 00:18:09.264 "seek_hole": false, 00:18:09.264 "seek_data": false, 00:18:09.264 "copy": false, 00:18:09.264 "nvme_iov_md": false 00:18:09.264 }, 00:18:09.264 "driver_specific": { 00:18:09.264 "raid": { 00:18:09.264 "uuid": "eebaed14-edf9-4d57-a0b9-0b7336397dc8", 00:18:09.264 "strip_size_kb": 64, 00:18:09.264 "state": "online", 00:18:09.264 "raid_level": "raid5f", 00:18:09.264 "superblock": true, 00:18:09.264 "num_base_bdevs": 4, 00:18:09.264 "num_base_bdevs_discovered": 4, 00:18:09.264 "num_base_bdevs_operational": 4, 00:18:09.264 "base_bdevs_list": [ 00:18:09.264 { 00:18:09.264 "name": "pt1", 00:18:09.264 "uuid": "00000000-0000-0000-0000-000000000001", 00:18:09.264 "is_configured": true, 00:18:09.264 "data_offset": 2048, 
00:18:09.264 "data_size": 63488 00:18:09.264 }, 00:18:09.264 { 00:18:09.264 "name": "pt2", 00:18:09.264 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:09.264 "is_configured": true, 00:18:09.264 "data_offset": 2048, 00:18:09.264 "data_size": 63488 00:18:09.264 }, 00:18:09.264 { 00:18:09.264 "name": "pt3", 00:18:09.264 "uuid": "00000000-0000-0000-0000-000000000003", 00:18:09.264 "is_configured": true, 00:18:09.265 "data_offset": 2048, 00:18:09.265 "data_size": 63488 00:18:09.265 }, 00:18:09.265 { 00:18:09.265 "name": "pt4", 00:18:09.265 "uuid": "00000000-0000-0000-0000-000000000004", 00:18:09.265 "is_configured": true, 00:18:09.265 "data_offset": 2048, 00:18:09.265 "data_size": 63488 00:18:09.265 } 00:18:09.265 ] 00:18:09.265 } 00:18:09.265 } 00:18:09.265 }' 00:18:09.265 04:09:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:18:09.265 04:09:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:18:09.265 pt2 00:18:09.265 pt3 00:18:09.265 pt4' 00:18:09.265 04:09:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:09.265 04:09:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:18:09.265 04:09:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:09.265 04:09:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:18:09.265 04:09:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:09.265 04:09:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:09.265 04:09:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:09.265 04:09:02 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:09.265 04:09:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:18:09.265 04:09:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:18:09.265 04:09:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:09.265 04:09:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:18:09.265 04:09:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:09.265 04:09:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:09.265 04:09:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:09.265 04:09:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:09.523 04:09:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:18:09.523 04:09:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:18:09.523 04:09:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:09.523 04:09:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:18:09.523 04:09:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:09.523 04:09:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:09.523 04:09:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:09.523 04:09:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:09.523 04:09:02 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:18:09.523 04:09:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:18:09.523 04:09:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:09.523 04:09:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:18:09.523 04:09:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:09.523 04:09:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:09.523 04:09:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:09.523 04:09:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:09.523 04:09:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:18:09.523 04:09:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:18:09.523 04:09:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:18:09.523 04:09:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:18:09.523 04:09:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:09.523 04:09:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:09.523 [2024-12-06 04:09:02.766063] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:09.523 04:09:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:09.523 04:09:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=eebaed14-edf9-4d57-a0b9-0b7336397dc8 00:18:09.523 04:09:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 
eebaed14-edf9-4d57-a0b9-0b7336397dc8 ']' 00:18:09.523 04:09:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:18:09.523 04:09:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:09.523 04:09:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:09.523 [2024-12-06 04:09:02.809805] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:09.523 [2024-12-06 04:09:02.809830] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:09.523 [2024-12-06 04:09:02.809903] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:09.523 [2024-12-06 04:09:02.809984] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:09.523 [2024-12-06 04:09:02.809998] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:18:09.523 04:09:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:09.523 04:09:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:09.523 04:09:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:09.523 04:09:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:09.523 04:09:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:18:09.523 04:09:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:09.523 04:09:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:18:09.523 04:09:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:18:09.523 04:09:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:18:09.523 
04:09:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:18:09.523 04:09:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:09.523 04:09:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:09.780 04:09:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:09.780 04:09:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:18:09.780 04:09:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:18:09.780 04:09:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:09.780 04:09:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:09.780 04:09:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:09.780 04:09:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:18:09.780 04:09:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:18:09.780 04:09:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:09.780 04:09:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:09.780 04:09:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:09.780 04:09:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:18:09.780 04:09:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt4 00:18:09.780 04:09:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:09.780 04:09:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:09.780 04:09:02 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:09.780 04:09:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:18:09.780 04:09:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:18:09.780 04:09:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:09.780 04:09:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:09.780 04:09:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:09.780 04:09:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:18:09.780 04:09:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:18:09.780 04:09:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:18:09.780 04:09:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:18:09.780 04:09:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:18:09.780 04:09:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:09.780 04:09:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:18:09.780 04:09:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:09.780 04:09:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:18:09.780 04:09:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 
-- # xtrace_disable 00:18:09.780 04:09:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:09.780 [2024-12-06 04:09:02.981514] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:18:09.780 [2024-12-06 04:09:02.983362] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:18:09.780 [2024-12-06 04:09:02.983449] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:18:09.780 [2024-12-06 04:09:02.983500] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed 00:18:09.780 [2024-12-06 04:09:02.983601] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:18:09.780 [2024-12-06 04:09:02.983689] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:18:09.780 [2024-12-06 04:09:02.983751] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:18:09.780 [2024-12-06 04:09:02.983805] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc4 00:18:09.780 [2024-12-06 04:09:02.983855] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:09.780 [2024-12-06 04:09:02.983888] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:18:09.780 request: 00:18:09.780 { 00:18:09.780 "name": "raid_bdev1", 00:18:09.780 "raid_level": "raid5f", 00:18:09.780 "base_bdevs": [ 00:18:09.780 "malloc1", 00:18:09.780 "malloc2", 00:18:09.780 "malloc3", 00:18:09.780 "malloc4" 00:18:09.780 ], 00:18:09.780 "strip_size_kb": 64, 00:18:09.780 "superblock": false, 00:18:09.780 "method": "bdev_raid_create", 00:18:09.780 "req_id": 1 00:18:09.780 } 00:18:09.780 Got JSON-RPC error response 
00:18:09.780 response: 00:18:09.780 { 00:18:09.780 "code": -17, 00:18:09.780 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:18:09.780 } 00:18:09.780 04:09:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:18:09.780 04:09:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:18:09.780 04:09:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:18:09.780 04:09:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:18:09.780 04:09:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:18:09.780 04:09:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:09.780 04:09:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:18:09.780 04:09:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:09.780 04:09:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:09.780 04:09:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:09.780 04:09:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:18:09.780 04:09:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:18:09.780 04:09:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:18:09.780 04:09:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:09.780 04:09:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:09.780 [2024-12-06 04:09:03.053366] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:18:09.780 [2024-12-06 04:09:03.053464] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: 
base bdev opened 00:18:09.780 [2024-12-06 04:09:03.053484] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:18:09.780 [2024-12-06 04:09:03.053495] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:09.781 [2024-12-06 04:09:03.055650] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:09.781 [2024-12-06 04:09:03.055689] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:18:09.781 [2024-12-06 04:09:03.055765] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:18:09.781 [2024-12-06 04:09:03.055823] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:18:09.781 pt1 00:18:09.781 04:09:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:09.781 04:09:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 4 00:18:09.781 04:09:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:09.781 04:09:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:09.781 04:09:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:09.781 04:09:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:09.781 04:09:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:18:09.781 04:09:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:09.781 04:09:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:09.781 04:09:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:09.781 04:09:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 
-- # local tmp 00:18:09.781 04:09:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:09.781 04:09:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:09.781 04:09:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:09.781 04:09:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:09.781 04:09:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:09.781 04:09:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:09.781 "name": "raid_bdev1", 00:18:09.781 "uuid": "eebaed14-edf9-4d57-a0b9-0b7336397dc8", 00:18:09.781 "strip_size_kb": 64, 00:18:09.781 "state": "configuring", 00:18:09.781 "raid_level": "raid5f", 00:18:09.781 "superblock": true, 00:18:09.781 "num_base_bdevs": 4, 00:18:09.781 "num_base_bdevs_discovered": 1, 00:18:09.781 "num_base_bdevs_operational": 4, 00:18:09.781 "base_bdevs_list": [ 00:18:09.781 { 00:18:09.781 "name": "pt1", 00:18:09.781 "uuid": "00000000-0000-0000-0000-000000000001", 00:18:09.781 "is_configured": true, 00:18:09.781 "data_offset": 2048, 00:18:09.781 "data_size": 63488 00:18:09.781 }, 00:18:09.781 { 00:18:09.781 "name": null, 00:18:09.781 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:09.781 "is_configured": false, 00:18:09.781 "data_offset": 2048, 00:18:09.781 "data_size": 63488 00:18:09.781 }, 00:18:09.781 { 00:18:09.781 "name": null, 00:18:09.781 "uuid": "00000000-0000-0000-0000-000000000003", 00:18:09.781 "is_configured": false, 00:18:09.781 "data_offset": 2048, 00:18:09.781 "data_size": 63488 00:18:09.781 }, 00:18:09.781 { 00:18:09.781 "name": null, 00:18:09.781 "uuid": "00000000-0000-0000-0000-000000000004", 00:18:09.781 "is_configured": false, 00:18:09.781 "data_offset": 2048, 00:18:09.781 "data_size": 63488 00:18:09.781 } 00:18:09.781 ] 00:18:09.781 }' 
00:18:09.781 04:09:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:09.781 04:09:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:10.352 04:09:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 4 -gt 2 ']' 00:18:10.352 04:09:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:18:10.352 04:09:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:10.352 04:09:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:10.352 [2024-12-06 04:09:03.500677] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:18:10.352 [2024-12-06 04:09:03.500798] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:10.352 [2024-12-06 04:09:03.500835] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:18:10.352 [2024-12-06 04:09:03.500868] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:10.352 [2024-12-06 04:09:03.501351] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:10.352 [2024-12-06 04:09:03.501413] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:18:10.352 [2024-12-06 04:09:03.501525] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:18:10.352 [2024-12-06 04:09:03.501580] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:18:10.352 pt2 00:18:10.352 04:09:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:10.352 04:09:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:18:10.352 04:09:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 
00:18:10.352 04:09:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:10.352 [2024-12-06 04:09:03.512645] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:18:10.352 04:09:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:10.352 04:09:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 4 00:18:10.352 04:09:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:10.352 04:09:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:10.352 04:09:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:10.352 04:09:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:10.352 04:09:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:18:10.352 04:09:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:10.352 04:09:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:10.352 04:09:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:10.352 04:09:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:10.352 04:09:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:10.352 04:09:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:10.352 04:09:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:10.352 04:09:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:10.352 04:09:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 
]] 00:18:10.352 04:09:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:10.352 "name": "raid_bdev1", 00:18:10.352 "uuid": "eebaed14-edf9-4d57-a0b9-0b7336397dc8", 00:18:10.352 "strip_size_kb": 64, 00:18:10.352 "state": "configuring", 00:18:10.352 "raid_level": "raid5f", 00:18:10.352 "superblock": true, 00:18:10.352 "num_base_bdevs": 4, 00:18:10.352 "num_base_bdevs_discovered": 1, 00:18:10.352 "num_base_bdevs_operational": 4, 00:18:10.352 "base_bdevs_list": [ 00:18:10.352 { 00:18:10.352 "name": "pt1", 00:18:10.352 "uuid": "00000000-0000-0000-0000-000000000001", 00:18:10.352 "is_configured": true, 00:18:10.352 "data_offset": 2048, 00:18:10.352 "data_size": 63488 00:18:10.352 }, 00:18:10.352 { 00:18:10.352 "name": null, 00:18:10.352 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:10.352 "is_configured": false, 00:18:10.352 "data_offset": 0, 00:18:10.352 "data_size": 63488 00:18:10.352 }, 00:18:10.352 { 00:18:10.352 "name": null, 00:18:10.352 "uuid": "00000000-0000-0000-0000-000000000003", 00:18:10.352 "is_configured": false, 00:18:10.352 "data_offset": 2048, 00:18:10.352 "data_size": 63488 00:18:10.352 }, 00:18:10.352 { 00:18:10.352 "name": null, 00:18:10.352 "uuid": "00000000-0000-0000-0000-000000000004", 00:18:10.352 "is_configured": false, 00:18:10.352 "data_offset": 2048, 00:18:10.352 "data_size": 63488 00:18:10.352 } 00:18:10.352 ] 00:18:10.352 }' 00:18:10.352 04:09:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:10.352 04:09:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:10.920 04:09:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:18:10.920 04:09:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:18:10.920 04:09:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 
00:18:10.920 04:09:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:10.920 04:09:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:10.920 [2024-12-06 04:09:03.999803] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:18:10.920 [2024-12-06 04:09:03.999876] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:10.920 [2024-12-06 04:09:03.999896] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:18:10.920 [2024-12-06 04:09:03.999904] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:10.920 [2024-12-06 04:09:04.000374] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:10.920 [2024-12-06 04:09:04.000445] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:18:10.920 [2024-12-06 04:09:04.000546] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:18:10.920 [2024-12-06 04:09:04.000582] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:18:10.920 pt2 00:18:10.920 04:09:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:10.920 04:09:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:18:10.920 04:09:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:18:10.920 04:09:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:18:10.920 04:09:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:10.920 04:09:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:10.920 [2024-12-06 04:09:04.011763] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc3 
00:18:10.920 [2024-12-06 04:09:04.011857] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:10.920 [2024-12-06 04:09:04.011885] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:18:10.920 [2024-12-06 04:09:04.011896] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:10.920 [2024-12-06 04:09:04.012311] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:10.920 [2024-12-06 04:09:04.012328] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:18:10.920 [2024-12-06 04:09:04.012397] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:18:10.920 [2024-12-06 04:09:04.012423] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:18:10.920 pt3 00:18:10.920 04:09:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:10.920 04:09:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:18:10.920 04:09:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:18:10.920 04:09:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:18:10.920 04:09:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:10.920 04:09:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:10.920 [2024-12-06 04:09:04.023712] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:18:10.920 [2024-12-06 04:09:04.023757] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:10.920 [2024-12-06 04:09:04.023773] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:18:10.920 [2024-12-06 04:09:04.023781] vbdev_passthru.c: 
697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:10.920 [2024-12-06 04:09:04.024164] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:10.920 [2024-12-06 04:09:04.024181] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:18:10.920 [2024-12-06 04:09:04.024243] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:18:10.920 [2024-12-06 04:09:04.024264] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:18:10.920 [2024-12-06 04:09:04.024400] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:18:10.920 [2024-12-06 04:09:04.024409] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:18:10.920 [2024-12-06 04:09:04.024661] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:18:10.920 [2024-12-06 04:09:04.031504] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:18:10.920 [2024-12-06 04:09:04.031525] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:18:10.920 [2024-12-06 04:09:04.031706] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:10.920 pt4 00:18:10.920 04:09:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:10.920 04:09:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:18:10.920 04:09:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:18:10.920 04:09:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:18:10.920 04:09:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:10.920 04:09:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local 
expected_state=online 00:18:10.920 04:09:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:10.920 04:09:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:10.920 04:09:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:18:10.920 04:09:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:10.920 04:09:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:10.920 04:09:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:10.920 04:09:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:10.920 04:09:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:10.920 04:09:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:10.920 04:09:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:10.920 04:09:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:10.920 04:09:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:10.920 04:09:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:10.920 "name": "raid_bdev1", 00:18:10.920 "uuid": "eebaed14-edf9-4d57-a0b9-0b7336397dc8", 00:18:10.920 "strip_size_kb": 64, 00:18:10.920 "state": "online", 00:18:10.920 "raid_level": "raid5f", 00:18:10.920 "superblock": true, 00:18:10.920 "num_base_bdevs": 4, 00:18:10.920 "num_base_bdevs_discovered": 4, 00:18:10.920 "num_base_bdevs_operational": 4, 00:18:10.920 "base_bdevs_list": [ 00:18:10.920 { 00:18:10.920 "name": "pt1", 00:18:10.920 "uuid": "00000000-0000-0000-0000-000000000001", 00:18:10.920 "is_configured": true, 00:18:10.920 
"data_offset": 2048, 00:18:10.920 "data_size": 63488 00:18:10.920 }, 00:18:10.920 { 00:18:10.920 "name": "pt2", 00:18:10.920 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:10.920 "is_configured": true, 00:18:10.920 "data_offset": 2048, 00:18:10.920 "data_size": 63488 00:18:10.920 }, 00:18:10.920 { 00:18:10.920 "name": "pt3", 00:18:10.920 "uuid": "00000000-0000-0000-0000-000000000003", 00:18:10.920 "is_configured": true, 00:18:10.920 "data_offset": 2048, 00:18:10.920 "data_size": 63488 00:18:10.920 }, 00:18:10.920 { 00:18:10.920 "name": "pt4", 00:18:10.920 "uuid": "00000000-0000-0000-0000-000000000004", 00:18:10.920 "is_configured": true, 00:18:10.920 "data_offset": 2048, 00:18:10.920 "data_size": 63488 00:18:10.920 } 00:18:10.920 ] 00:18:10.920 }' 00:18:10.920 04:09:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:10.920 04:09:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:11.178 04:09:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:18:11.178 04:09:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:18:11.178 04:09:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:18:11.178 04:09:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:18:11.178 04:09:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:18:11.178 04:09:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:18:11.178 04:09:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:18:11.178 04:09:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:18:11.178 04:09:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:11.178 04:09:04 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:11.178 [2024-12-06 04:09:04.495543] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:11.178 04:09:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:11.178 04:09:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:18:11.178 "name": "raid_bdev1", 00:18:11.178 "aliases": [ 00:18:11.178 "eebaed14-edf9-4d57-a0b9-0b7336397dc8" 00:18:11.178 ], 00:18:11.178 "product_name": "Raid Volume", 00:18:11.178 "block_size": 512, 00:18:11.178 "num_blocks": 190464, 00:18:11.178 "uuid": "eebaed14-edf9-4d57-a0b9-0b7336397dc8", 00:18:11.178 "assigned_rate_limits": { 00:18:11.178 "rw_ios_per_sec": 0, 00:18:11.178 "rw_mbytes_per_sec": 0, 00:18:11.178 "r_mbytes_per_sec": 0, 00:18:11.178 "w_mbytes_per_sec": 0 00:18:11.178 }, 00:18:11.178 "claimed": false, 00:18:11.178 "zoned": false, 00:18:11.178 "supported_io_types": { 00:18:11.178 "read": true, 00:18:11.178 "write": true, 00:18:11.178 "unmap": false, 00:18:11.178 "flush": false, 00:18:11.178 "reset": true, 00:18:11.178 "nvme_admin": false, 00:18:11.178 "nvme_io": false, 00:18:11.178 "nvme_io_md": false, 00:18:11.178 "write_zeroes": true, 00:18:11.178 "zcopy": false, 00:18:11.178 "get_zone_info": false, 00:18:11.178 "zone_management": false, 00:18:11.178 "zone_append": false, 00:18:11.178 "compare": false, 00:18:11.178 "compare_and_write": false, 00:18:11.178 "abort": false, 00:18:11.178 "seek_hole": false, 00:18:11.178 "seek_data": false, 00:18:11.178 "copy": false, 00:18:11.178 "nvme_iov_md": false 00:18:11.178 }, 00:18:11.178 "driver_specific": { 00:18:11.178 "raid": { 00:18:11.178 "uuid": "eebaed14-edf9-4d57-a0b9-0b7336397dc8", 00:18:11.178 "strip_size_kb": 64, 00:18:11.178 "state": "online", 00:18:11.178 "raid_level": "raid5f", 00:18:11.178 "superblock": true, 00:18:11.178 "num_base_bdevs": 4, 00:18:11.178 "num_base_bdevs_discovered": 4, 
00:18:11.178 "num_base_bdevs_operational": 4, 00:18:11.178 "base_bdevs_list": [ 00:18:11.178 { 00:18:11.178 "name": "pt1", 00:18:11.178 "uuid": "00000000-0000-0000-0000-000000000001", 00:18:11.178 "is_configured": true, 00:18:11.178 "data_offset": 2048, 00:18:11.178 "data_size": 63488 00:18:11.178 }, 00:18:11.178 { 00:18:11.178 "name": "pt2", 00:18:11.178 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:11.178 "is_configured": true, 00:18:11.178 "data_offset": 2048, 00:18:11.178 "data_size": 63488 00:18:11.178 }, 00:18:11.178 { 00:18:11.178 "name": "pt3", 00:18:11.178 "uuid": "00000000-0000-0000-0000-000000000003", 00:18:11.178 "is_configured": true, 00:18:11.178 "data_offset": 2048, 00:18:11.178 "data_size": 63488 00:18:11.178 }, 00:18:11.178 { 00:18:11.178 "name": "pt4", 00:18:11.178 "uuid": "00000000-0000-0000-0000-000000000004", 00:18:11.178 "is_configured": true, 00:18:11.178 "data_offset": 2048, 00:18:11.178 "data_size": 63488 00:18:11.178 } 00:18:11.178 ] 00:18:11.178 } 00:18:11.178 } 00:18:11.178 }' 00:18:11.178 04:09:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:18:11.435 04:09:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:18:11.435 pt2 00:18:11.435 pt3 00:18:11.435 pt4' 00:18:11.435 04:09:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:11.435 04:09:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:18:11.435 04:09:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:11.435 04:09:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:18:11.435 04:09:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | 
join(" ")' 00:18:11.435 04:09:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:11.435 04:09:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:11.435 04:09:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:11.435 04:09:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:18:11.435 04:09:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:18:11.435 04:09:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:11.435 04:09:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:18:11.435 04:09:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:11.435 04:09:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:11.435 04:09:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:11.435 04:09:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:11.435 04:09:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:18:11.435 04:09:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:18:11.435 04:09:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:11.435 04:09:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:18:11.435 04:09:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:11.435 04:09:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:11.436 04:09:04 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:11.436 04:09:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:11.436 04:09:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:18:11.436 04:09:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:18:11.436 04:09:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:11.436 04:09:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:18:11.436 04:09:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:11.436 04:09:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:11.436 04:09:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:11.436 04:09:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:11.694 04:09:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:18:11.694 04:09:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:18:11.694 04:09:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:18:11.694 04:09:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:11.694 04:09:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:11.694 04:09:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:18:11.694 [2024-12-06 04:09:04.798980] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:11.694 04:09:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:11.694 
04:09:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' eebaed14-edf9-4d57-a0b9-0b7336397dc8 '!=' eebaed14-edf9-4d57-a0b9-0b7336397dc8 ']' 00:18:11.694 04:09:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid5f 00:18:11.694 04:09:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:18:11.694 04:09:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@199 -- # return 0 00:18:11.694 04:09:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:18:11.694 04:09:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:11.694 04:09:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:11.694 [2024-12-06 04:09:04.846772] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:18:11.694 04:09:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:11.694 04:09:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:18:11.694 04:09:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:11.694 04:09:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:11.694 04:09:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:11.694 04:09:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:11.694 04:09:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:18:11.694 04:09:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:11.694 04:09:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:11.694 04:09:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:18:11.694 04:09:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:11.694 04:09:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:11.694 04:09:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:11.694 04:09:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:11.694 04:09:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:11.694 04:09:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:11.694 04:09:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:11.694 "name": "raid_bdev1", 00:18:11.694 "uuid": "eebaed14-edf9-4d57-a0b9-0b7336397dc8", 00:18:11.694 "strip_size_kb": 64, 00:18:11.694 "state": "online", 00:18:11.694 "raid_level": "raid5f", 00:18:11.694 "superblock": true, 00:18:11.694 "num_base_bdevs": 4, 00:18:11.694 "num_base_bdevs_discovered": 3, 00:18:11.694 "num_base_bdevs_operational": 3, 00:18:11.694 "base_bdevs_list": [ 00:18:11.694 { 00:18:11.694 "name": null, 00:18:11.694 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:11.694 "is_configured": false, 00:18:11.694 "data_offset": 0, 00:18:11.694 "data_size": 63488 00:18:11.694 }, 00:18:11.694 { 00:18:11.694 "name": "pt2", 00:18:11.694 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:11.694 "is_configured": true, 00:18:11.694 "data_offset": 2048, 00:18:11.694 "data_size": 63488 00:18:11.694 }, 00:18:11.694 { 00:18:11.694 "name": "pt3", 00:18:11.694 "uuid": "00000000-0000-0000-0000-000000000003", 00:18:11.694 "is_configured": true, 00:18:11.694 "data_offset": 2048, 00:18:11.694 "data_size": 63488 00:18:11.694 }, 00:18:11.694 { 00:18:11.694 "name": "pt4", 00:18:11.694 "uuid": "00000000-0000-0000-0000-000000000004", 00:18:11.694 "is_configured": true, 00:18:11.694 
"data_offset": 2048, 00:18:11.694 "data_size": 63488 00:18:11.694 } 00:18:11.694 ] 00:18:11.694 }' 00:18:11.694 04:09:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:11.694 04:09:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:11.950 04:09:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:18:11.950 04:09:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:11.950 04:09:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:11.950 [2024-12-06 04:09:05.270020] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:11.950 [2024-12-06 04:09:05.270118] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:11.950 [2024-12-06 04:09:05.270220] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:11.950 [2024-12-06 04:09:05.270343] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:11.950 [2024-12-06 04:09:05.270389] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:18:11.950 04:09:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:11.950 04:09:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:11.950 04:09:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:18:11.950 04:09:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:11.950 04:09:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:11.950 04:09:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:12.208 04:09:05 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@500 -- # raid_bdev= 00:18:12.208 04:09:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:18:12.208 04:09:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:18:12.208 04:09:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:18:12.208 04:09:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:18:12.208 04:09:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:12.208 04:09:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:12.208 04:09:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:12.208 04:09:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:18:12.208 04:09:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:18:12.208 04:09:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt3 00:18:12.208 04:09:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:12.208 04:09:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:12.208 04:09:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:12.208 04:09:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:18:12.208 04:09:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:18:12.208 04:09:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt4 00:18:12.208 04:09:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:12.208 04:09:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:12.208 04:09:05 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:12.208 04:09:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:18:12.208 04:09:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:18:12.208 04:09:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:18:12.208 04:09:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:18:12.208 04:09:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:18:12.208 04:09:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:12.208 04:09:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:12.208 [2024-12-06 04:09:05.369856] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:18:12.208 [2024-12-06 04:09:05.369963] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:12.208 [2024-12-06 04:09:05.369989] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480 00:18:12.208 [2024-12-06 04:09:05.369999] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:12.208 [2024-12-06 04:09:05.372550] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:12.208 [2024-12-06 04:09:05.372613] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:18:12.208 [2024-12-06 04:09:05.372706] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:18:12.208 [2024-12-06 04:09:05.372767] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:18:12.208 pt2 00:18:12.208 04:09:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:12.208 04:09:05 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 3 00:18:12.208 04:09:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:12.208 04:09:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:12.208 04:09:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:12.208 04:09:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:12.208 04:09:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:18:12.208 04:09:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:12.208 04:09:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:12.208 04:09:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:12.208 04:09:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:12.208 04:09:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:12.208 04:09:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:12.208 04:09:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:12.208 04:09:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:12.208 04:09:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:12.208 04:09:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:12.208 "name": "raid_bdev1", 00:18:12.208 "uuid": "eebaed14-edf9-4d57-a0b9-0b7336397dc8", 00:18:12.208 "strip_size_kb": 64, 00:18:12.208 "state": "configuring", 00:18:12.208 "raid_level": "raid5f", 00:18:12.208 "superblock": true, 00:18:12.208 
"num_base_bdevs": 4, 00:18:12.208 "num_base_bdevs_discovered": 1, 00:18:12.208 "num_base_bdevs_operational": 3, 00:18:12.209 "base_bdevs_list": [ 00:18:12.209 { 00:18:12.209 "name": null, 00:18:12.209 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:12.209 "is_configured": false, 00:18:12.209 "data_offset": 2048, 00:18:12.209 "data_size": 63488 00:18:12.209 }, 00:18:12.209 { 00:18:12.209 "name": "pt2", 00:18:12.209 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:12.209 "is_configured": true, 00:18:12.209 "data_offset": 2048, 00:18:12.209 "data_size": 63488 00:18:12.209 }, 00:18:12.209 { 00:18:12.209 "name": null, 00:18:12.209 "uuid": "00000000-0000-0000-0000-000000000003", 00:18:12.209 "is_configured": false, 00:18:12.209 "data_offset": 2048, 00:18:12.209 "data_size": 63488 00:18:12.209 }, 00:18:12.209 { 00:18:12.209 "name": null, 00:18:12.209 "uuid": "00000000-0000-0000-0000-000000000004", 00:18:12.209 "is_configured": false, 00:18:12.209 "data_offset": 2048, 00:18:12.209 "data_size": 63488 00:18:12.209 } 00:18:12.209 ] 00:18:12.209 }' 00:18:12.209 04:09:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:12.209 04:09:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:12.775 04:09:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ )) 00:18:12.775 04:09:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:18:12.775 04:09:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:18:12.775 04:09:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:12.775 04:09:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:12.775 [2024-12-06 04:09:05.849078] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:18:12.775 [2024-12-06 
04:09:05.849228] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:12.775 [2024-12-06 04:09:05.849282] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ba80 00:18:12.775 [2024-12-06 04:09:05.849313] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:12.775 [2024-12-06 04:09:05.849795] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:12.775 [2024-12-06 04:09:05.849852] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:18:12.775 [2024-12-06 04:09:05.849968] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:18:12.775 [2024-12-06 04:09:05.850019] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:18:12.775 pt3 00:18:12.775 04:09:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:12.775 04:09:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 3 00:18:12.775 04:09:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:12.775 04:09:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:12.775 04:09:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:12.775 04:09:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:12.775 04:09:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:18:12.775 04:09:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:12.775 04:09:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:12.775 04:09:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
00:18:12.775 04:09:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:12.775 04:09:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:12.775 04:09:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:12.775 04:09:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:12.775 04:09:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:12.775 04:09:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:12.775 04:09:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:12.775 "name": "raid_bdev1", 00:18:12.775 "uuid": "eebaed14-edf9-4d57-a0b9-0b7336397dc8", 00:18:12.775 "strip_size_kb": 64, 00:18:12.775 "state": "configuring", 00:18:12.775 "raid_level": "raid5f", 00:18:12.775 "superblock": true, 00:18:12.775 "num_base_bdevs": 4, 00:18:12.775 "num_base_bdevs_discovered": 2, 00:18:12.775 "num_base_bdevs_operational": 3, 00:18:12.775 "base_bdevs_list": [ 00:18:12.775 { 00:18:12.775 "name": null, 00:18:12.775 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:12.775 "is_configured": false, 00:18:12.775 "data_offset": 2048, 00:18:12.775 "data_size": 63488 00:18:12.775 }, 00:18:12.775 { 00:18:12.775 "name": "pt2", 00:18:12.775 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:12.775 "is_configured": true, 00:18:12.775 "data_offset": 2048, 00:18:12.775 "data_size": 63488 00:18:12.775 }, 00:18:12.775 { 00:18:12.775 "name": "pt3", 00:18:12.775 "uuid": "00000000-0000-0000-0000-000000000003", 00:18:12.775 "is_configured": true, 00:18:12.775 "data_offset": 2048, 00:18:12.775 "data_size": 63488 00:18:12.775 }, 00:18:12.775 { 00:18:12.775 "name": null, 00:18:12.775 "uuid": "00000000-0000-0000-0000-000000000004", 00:18:12.775 "is_configured": false, 00:18:12.775 "data_offset": 2048, 
00:18:12.775 "data_size": 63488 00:18:12.775 } 00:18:12.775 ] 00:18:12.775 }' 00:18:12.775 04:09:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:12.775 04:09:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:13.032 04:09:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ )) 00:18:13.032 04:09:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:18:13.032 04:09:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@519 -- # i=3 00:18:13.032 04:09:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:18:13.032 04:09:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:13.032 04:09:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:13.033 [2024-12-06 04:09:06.312298] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:18:13.033 [2024-12-06 04:09:06.312409] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:13.033 [2024-12-06 04:09:06.312449] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000bd80 00:18:13.033 [2024-12-06 04:09:06.312478] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:13.033 [2024-12-06 04:09:06.312999] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:13.033 [2024-12-06 04:09:06.313079] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:18:13.033 [2024-12-06 04:09:06.313206] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:18:13.033 [2024-12-06 04:09:06.313268] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:18:13.033 [2024-12-06 04:09:06.313449] 
bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:18:13.033 [2024-12-06 04:09:06.313491] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:18:13.033 [2024-12-06 04:09:06.313789] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:18:13.033 [2024-12-06 04:09:06.321133] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:18:13.033 [2024-12-06 04:09:06.321193] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:18:13.033 [2024-12-06 04:09:06.321560] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:13.033 pt4 00:18:13.033 04:09:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:13.033 04:09:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:18:13.033 04:09:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:13.033 04:09:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:13.033 04:09:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:13.033 04:09:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:13.033 04:09:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:18:13.033 04:09:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:13.033 04:09:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:13.033 04:09:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:13.033 04:09:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:13.033 
04:09:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:13.033 04:09:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:13.033 04:09:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:13.033 04:09:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:13.033 04:09:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:13.033 04:09:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:13.033 "name": "raid_bdev1", 00:18:13.033 "uuid": "eebaed14-edf9-4d57-a0b9-0b7336397dc8", 00:18:13.033 "strip_size_kb": 64, 00:18:13.033 "state": "online", 00:18:13.033 "raid_level": "raid5f", 00:18:13.033 "superblock": true, 00:18:13.033 "num_base_bdevs": 4, 00:18:13.033 "num_base_bdevs_discovered": 3, 00:18:13.033 "num_base_bdevs_operational": 3, 00:18:13.033 "base_bdevs_list": [ 00:18:13.033 { 00:18:13.033 "name": null, 00:18:13.033 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:13.033 "is_configured": false, 00:18:13.033 "data_offset": 2048, 00:18:13.033 "data_size": 63488 00:18:13.033 }, 00:18:13.033 { 00:18:13.033 "name": "pt2", 00:18:13.033 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:13.033 "is_configured": true, 00:18:13.033 "data_offset": 2048, 00:18:13.033 "data_size": 63488 00:18:13.033 }, 00:18:13.033 { 00:18:13.033 "name": "pt3", 00:18:13.033 "uuid": "00000000-0000-0000-0000-000000000003", 00:18:13.033 "is_configured": true, 00:18:13.033 "data_offset": 2048, 00:18:13.033 "data_size": 63488 00:18:13.033 }, 00:18:13.033 { 00:18:13.033 "name": "pt4", 00:18:13.033 "uuid": "00000000-0000-0000-0000-000000000004", 00:18:13.033 "is_configured": true, 00:18:13.033 "data_offset": 2048, 00:18:13.033 "data_size": 63488 00:18:13.033 } 00:18:13.033 ] 00:18:13.033 }' 00:18:13.033 04:09:06 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:13.033 04:09:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:13.600 04:09:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:18:13.600 04:09:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:13.600 04:09:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:13.600 [2024-12-06 04:09:06.798667] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:13.600 [2024-12-06 04:09:06.798698] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:13.600 [2024-12-06 04:09:06.798775] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:13.600 [2024-12-06 04:09:06.798847] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:13.600 [2024-12-06 04:09:06.798859] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:18:13.600 04:09:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:13.600 04:09:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:18:13.600 04:09:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:13.600 04:09:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:13.600 04:09:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:13.600 04:09:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:13.600 04:09:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:18:13.600 04:09:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@528 -- # '[' -n 
'' ']' 00:18:13.600 04:09:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@532 -- # '[' 4 -gt 2 ']' 00:18:13.600 04:09:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@534 -- # i=3 00:18:13.600 04:09:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@535 -- # rpc_cmd bdev_passthru_delete pt4 00:18:13.600 04:09:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:13.600 04:09:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:13.600 04:09:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:13.600 04:09:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:18:13.600 04:09:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:13.600 04:09:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:13.600 [2024-12-06 04:09:06.874541] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:18:13.600 [2024-12-06 04:09:06.874671] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:13.600 [2024-12-06 04:09:06.874726] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c080 00:18:13.600 [2024-12-06 04:09:06.874770] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:13.600 [2024-12-06 04:09:06.877564] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:13.600 [2024-12-06 04:09:06.877666] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:18:13.600 [2024-12-06 04:09:06.877811] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:18:13.600 [2024-12-06 04:09:06.877908] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:18:13.600 
[2024-12-06 04:09:06.878104] bdev_raid.c:3685:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:18:13.600 [2024-12-06 04:09:06.878168] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:13.600 [2024-12-06 04:09:06.878210] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state configuring 00:18:13.600 [2024-12-06 04:09:06.878332] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:18:13.600 [2024-12-06 04:09:06.878488] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:18:13.600 pt1 00:18:13.600 04:09:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:13.600 04:09:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@542 -- # '[' 4 -gt 2 ']' 00:18:13.600 04:09:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@545 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 3 00:18:13.600 04:09:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:13.600 04:09:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:13.600 04:09:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:13.600 04:09:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:13.600 04:09:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:18:13.600 04:09:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:13.600 04:09:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:13.600 04:09:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:13.600 04:09:06 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:18:13.600 04:09:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:13.600 04:09:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:13.600 04:09:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:13.600 04:09:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:13.600 04:09:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:13.600 04:09:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:13.600 "name": "raid_bdev1", 00:18:13.600 "uuid": "eebaed14-edf9-4d57-a0b9-0b7336397dc8", 00:18:13.600 "strip_size_kb": 64, 00:18:13.600 "state": "configuring", 00:18:13.600 "raid_level": "raid5f", 00:18:13.600 "superblock": true, 00:18:13.600 "num_base_bdevs": 4, 00:18:13.600 "num_base_bdevs_discovered": 2, 00:18:13.600 "num_base_bdevs_operational": 3, 00:18:13.600 "base_bdevs_list": [ 00:18:13.600 { 00:18:13.600 "name": null, 00:18:13.600 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:13.600 "is_configured": false, 00:18:13.600 "data_offset": 2048, 00:18:13.600 "data_size": 63488 00:18:13.600 }, 00:18:13.600 { 00:18:13.600 "name": "pt2", 00:18:13.600 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:13.600 "is_configured": true, 00:18:13.600 "data_offset": 2048, 00:18:13.600 "data_size": 63488 00:18:13.600 }, 00:18:13.600 { 00:18:13.600 "name": "pt3", 00:18:13.600 "uuid": "00000000-0000-0000-0000-000000000003", 00:18:13.600 "is_configured": true, 00:18:13.600 "data_offset": 2048, 00:18:13.600 "data_size": 63488 00:18:13.600 }, 00:18:13.600 { 00:18:13.600 "name": null, 00:18:13.600 "uuid": "00000000-0000-0000-0000-000000000004", 00:18:13.600 "is_configured": false, 00:18:13.600 "data_offset": 2048, 00:18:13.600 "data_size": 63488 00:18:13.600 } 00:18:13.600 ] 
00:18:13.600 }' 00:18:13.600 04:09:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:13.600 04:09:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:14.167 04:09:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # rpc_cmd bdev_raid_get_bdevs configuring 00:18:14.167 04:09:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:18:14.167 04:09:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:14.167 04:09:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:14.167 04:09:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:14.167 04:09:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # [[ false == \f\a\l\s\e ]] 00:18:14.167 04:09:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@549 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:18:14.167 04:09:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:14.167 04:09:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:14.167 [2024-12-06 04:09:07.429699] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:18:14.167 [2024-12-06 04:09:07.429832] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:14.167 [2024-12-06 04:09:07.429876] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c680 00:18:14.167 [2024-12-06 04:09:07.429910] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:14.167 [2024-12-06 04:09:07.430445] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:14.167 [2024-12-06 04:09:07.430509] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 
00:18:14.167 [2024-12-06 04:09:07.430640] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:18:14.167 [2024-12-06 04:09:07.430696] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:18:14.167 [2024-12-06 04:09:07.430882] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008900 00:18:14.167 [2024-12-06 04:09:07.430924] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:18:14.167 [2024-12-06 04:09:07.431240] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:18:14.168 [2024-12-06 04:09:07.439702] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:18:14.168 [2024-12-06 04:09:07.439761] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:18:14.168 [2024-12-06 04:09:07.440133] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:14.168 pt4 00:18:14.168 04:09:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:14.168 04:09:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:18:14.168 04:09:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:14.168 04:09:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:14.168 04:09:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:14.168 04:09:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:14.168 04:09:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:18:14.168 04:09:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:14.168 04:09:07 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:14.168 04:09:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:14.168 04:09:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:14.168 04:09:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:14.168 04:09:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:14.168 04:09:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:14.168 04:09:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:14.168 04:09:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:14.168 04:09:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:14.168 "name": "raid_bdev1", 00:18:14.168 "uuid": "eebaed14-edf9-4d57-a0b9-0b7336397dc8", 00:18:14.168 "strip_size_kb": 64, 00:18:14.168 "state": "online", 00:18:14.168 "raid_level": "raid5f", 00:18:14.168 "superblock": true, 00:18:14.168 "num_base_bdevs": 4, 00:18:14.168 "num_base_bdevs_discovered": 3, 00:18:14.168 "num_base_bdevs_operational": 3, 00:18:14.168 "base_bdevs_list": [ 00:18:14.168 { 00:18:14.168 "name": null, 00:18:14.168 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:14.168 "is_configured": false, 00:18:14.168 "data_offset": 2048, 00:18:14.168 "data_size": 63488 00:18:14.168 }, 00:18:14.168 { 00:18:14.168 "name": "pt2", 00:18:14.168 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:14.168 "is_configured": true, 00:18:14.168 "data_offset": 2048, 00:18:14.168 "data_size": 63488 00:18:14.168 }, 00:18:14.168 { 00:18:14.168 "name": "pt3", 00:18:14.168 "uuid": "00000000-0000-0000-0000-000000000003", 00:18:14.168 "is_configured": true, 00:18:14.168 "data_offset": 2048, 00:18:14.168 "data_size": 63488 
00:18:14.168 }, 00:18:14.168 { 00:18:14.168 "name": "pt4", 00:18:14.168 "uuid": "00000000-0000-0000-0000-000000000004", 00:18:14.168 "is_configured": true, 00:18:14.168 "data_offset": 2048, 00:18:14.168 "data_size": 63488 00:18:14.168 } 00:18:14.168 ] 00:18:14.168 }' 00:18:14.168 04:09:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:14.168 04:09:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:14.736 04:09:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:18:14.736 04:09:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:14.736 04:09:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:18:14.736 04:09:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:14.736 04:09:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:14.736 04:09:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:18:14.736 04:09:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:18:14.736 04:09:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:14.736 04:09:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:14.736 04:09:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:18:14.736 [2024-12-06 04:09:07.941563] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:14.736 04:09:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:14.736 04:09:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # '[' eebaed14-edf9-4d57-a0b9-0b7336397dc8 '!=' eebaed14-edf9-4d57-a0b9-0b7336397dc8 ']' 00:18:14.736 04:09:07 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 84389 00:18:14.736 04:09:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 84389 ']' 00:18:14.736 04:09:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@958 -- # kill -0 84389 00:18:14.736 04:09:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@959 -- # uname 00:18:14.736 04:09:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:14.736 04:09:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 84389 00:18:14.736 killing process with pid 84389 00:18:14.736 04:09:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:14.736 04:09:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:14.736 04:09:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 84389' 00:18:14.736 04:09:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@973 -- # kill 84389 00:18:14.736 [2024-12-06 04:09:08.019071] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:18:14.736 [2024-12-06 04:09:08.019169] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:14.736 [2024-12-06 04:09:08.019253] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:14.736 04:09:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@978 -- # wait 84389 00:18:14.736 [2024-12-06 04:09:08.019270] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 00:18:15.302 [2024-12-06 04:09:08.430029] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:18:16.237 ************************************ 00:18:16.237 END TEST raid5f_superblock_test 00:18:16.237 
************************************ 00:18:16.237 04:09:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:18:16.237 00:18:16.237 real 0m8.838s 00:18:16.237 user 0m13.927s 00:18:16.237 sys 0m1.601s 00:18:16.237 04:09:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:16.237 04:09:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:16.496 04:09:09 bdev_raid -- bdev/bdev_raid.sh@989 -- # '[' true = true ']' 00:18:16.497 04:09:09 bdev_raid -- bdev/bdev_raid.sh@990 -- # run_test raid5f_rebuild_test raid_rebuild_test raid5f 4 false false true 00:18:16.497 04:09:09 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:18:16.497 04:09:09 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:16.497 04:09:09 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:18:16.497 ************************************ 00:18:16.497 START TEST raid5f_rebuild_test 00:18:16.497 ************************************ 00:18:16.497 04:09:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid5f 4 false false true 00:18:16.497 04:09:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@569 -- # local raid_level=raid5f 00:18:16.497 04:09:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4 00:18:16.497 04:09:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@571 -- # local superblock=false 00:18:16.497 04:09:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:18:16.497 04:09:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@573 -- # local verify=true 00:18:16.497 04:09:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:18:16.497 04:09:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:18:16.497 04:09:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo 
BaseBdev1 00:18:16.497 04:09:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:18:16.497 04:09:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:18:16.497 04:09:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:18:16.497 04:09:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:18:16.497 04:09:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:18:16.497 04:09:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:18:16.497 04:09:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:18:16.497 04:09:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:18:16.497 04:09:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev4 00:18:16.497 04:09:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:18:16.497 04:09:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:18:16.497 04:09:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:18:16.497 04:09:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:18:16.497 04:09:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:18:16.497 04:09:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # local strip_size 00:18:16.497 04:09:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@577 -- # local create_arg 00:18:16.497 04:09:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:18:16.497 04:09:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@579 -- # local data_offset 00:18:16.497 04:09:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@581 -- # '[' raid5f '!=' raid1 ']' 00:18:16.497 04:09:09 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@582 -- # '[' false = true ']' 00:18:16.497 04:09:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@586 -- # strip_size=64 00:18:16.497 04:09:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@587 -- # create_arg+=' -z 64' 00:18:16.497 04:09:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@592 -- # '[' false = true ']' 00:18:16.497 04:09:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@597 -- # raid_pid=84875 00:18:16.497 04:09:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:18:16.497 04:09:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@598 -- # waitforlisten 84875 00:18:16.497 04:09:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@835 -- # '[' -z 84875 ']' 00:18:16.497 04:09:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:16.497 04:09:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:16.497 04:09:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:16.497 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:16.497 04:09:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:16.497 04:09:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:16.497 [2024-12-06 04:09:09.741343] Starting SPDK v25.01-pre git sha1 a4d2a837b / DPDK 24.03.0 initialization... 
00:18:16.497 [2024-12-06 04:09:09.741562] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84875 ] 00:18:16.497 I/O size of 3145728 is greater than zero copy threshold (65536). 00:18:16.497 Zero copy mechanism will not be used. 00:18:16.756 [2024-12-06 04:09:09.917106] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:16.757 [2024-12-06 04:09:10.030882] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:17.016 [2024-12-06 04:09:10.226229] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:17.016 [2024-12-06 04:09:10.226374] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:17.275 04:09:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:17.275 04:09:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@868 -- # return 0 00:18:17.275 04:09:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:18:17.275 04:09:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:18:17.275 04:09:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:17.275 04:09:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:17.535 BaseBdev1_malloc 00:18:17.535 04:09:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:17.535 04:09:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:18:17.535 04:09:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:17.535 04:09:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 
-- # set +x 00:18:17.535 [2024-12-06 04:09:10.678208] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:18:17.535 [2024-12-06 04:09:10.678318] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:17.535 [2024-12-06 04:09:10.678391] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:18:17.535 [2024-12-06 04:09:10.678429] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:17.535 [2024-12-06 04:09:10.680555] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:17.535 [2024-12-06 04:09:10.680637] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:18:17.535 BaseBdev1 00:18:17.535 04:09:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:17.535 04:09:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:18:17.535 04:09:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:18:17.535 04:09:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:17.535 04:09:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:17.535 BaseBdev2_malloc 00:18:17.535 04:09:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:17.535 04:09:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:18:17.535 04:09:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:17.535 04:09:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:17.535 [2024-12-06 04:09:10.733550] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:18:17.535 [2024-12-06 04:09:10.733666] vbdev_passthru.c: 
636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:17.535 [2024-12-06 04:09:10.733713] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:18:17.535 [2024-12-06 04:09:10.733752] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:17.535 [2024-12-06 04:09:10.736084] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:17.535 [2024-12-06 04:09:10.736125] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:18:17.535 BaseBdev2 00:18:17.535 04:09:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:17.535 04:09:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:18:17.535 04:09:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:18:17.535 04:09:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:17.535 04:09:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:17.535 BaseBdev3_malloc 00:18:17.535 04:09:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:17.535 04:09:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:18:17.535 04:09:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:17.535 04:09:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:17.535 [2024-12-06 04:09:10.805271] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:18:17.535 [2024-12-06 04:09:10.805375] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:17.535 [2024-12-06 04:09:10.805431] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:18:17.535 
[2024-12-06 04:09:10.805466] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:17.535 [2024-12-06 04:09:10.807531] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:17.535 [2024-12-06 04:09:10.807605] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:18:17.535 BaseBdev3 00:18:17.535 04:09:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:17.535 04:09:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:18:17.535 04:09:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:18:17.535 04:09:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:17.535 04:09:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:17.535 BaseBdev4_malloc 00:18:17.535 04:09:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:17.535 04:09:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:18:17.535 04:09:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:17.535 04:09:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:17.535 [2024-12-06 04:09:10.864563] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:18:17.535 [2024-12-06 04:09:10.864692] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:17.535 [2024-12-06 04:09:10.864756] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:18:17.535 [2024-12-06 04:09:10.864798] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:17.535 [2024-12-06 04:09:10.867127] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev 
registered 00:18:17.535 [2024-12-06 04:09:10.867168] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:18:17.535 BaseBdev4 00:18:17.535 04:09:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:17.535 04:09:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:18:17.535 04:09:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:17.535 04:09:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:17.795 spare_malloc 00:18:17.795 04:09:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:17.795 04:09:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:18:17.795 04:09:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:17.795 04:09:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:17.795 spare_delay 00:18:17.795 04:09:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:17.795 04:09:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:18:17.795 04:09:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:17.795 04:09:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:17.795 [2024-12-06 04:09:10.932731] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:18:17.795 [2024-12-06 04:09:10.932858] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:17.795 [2024-12-06 04:09:10.932903] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:18:17.795 [2024-12-06 04:09:10.932946] vbdev_passthru.c: 
697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:17.795 [2024-12-06 04:09:10.935325] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:17.795 [2024-12-06 04:09:10.935405] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:18:17.795 spare 00:18:17.795 04:09:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:17.796 04:09:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 00:18:17.796 04:09:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:17.796 04:09:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:17.796 [2024-12-06 04:09:10.944778] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:17.796 [2024-12-06 04:09:10.946912] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:18:17.796 [2024-12-06 04:09:10.947037] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:18:17.796 [2024-12-06 04:09:10.947155] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:18:17.796 [2024-12-06 04:09:10.947304] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:18:17.796 [2024-12-06 04:09:10.947353] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:18:17.796 [2024-12-06 04:09:10.947693] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:18:17.796 [2024-12-06 04:09:10.956239] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:18:17.796 [2024-12-06 04:09:10.956263] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:18:17.796 [2024-12-06 
04:09:10.956505] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:17.796 04:09:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:17.796 04:09:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:18:17.796 04:09:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:17.796 04:09:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:17.796 04:09:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:17.796 04:09:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:17.796 04:09:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:18:17.796 04:09:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:17.796 04:09:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:17.796 04:09:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:17.796 04:09:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:17.796 04:09:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:17.796 04:09:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:17.796 04:09:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:17.796 04:09:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:17.796 04:09:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:17.796 04:09:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:17.796 "name": "raid_bdev1", 00:18:17.796 "uuid": 
"8ebfe49e-40cd-4d62-b1e8-3bd75704e1ed", 00:18:17.796 "strip_size_kb": 64, 00:18:17.796 "state": "online", 00:18:17.796 "raid_level": "raid5f", 00:18:17.796 "superblock": false, 00:18:17.796 "num_base_bdevs": 4, 00:18:17.796 "num_base_bdevs_discovered": 4, 00:18:17.796 "num_base_bdevs_operational": 4, 00:18:17.796 "base_bdevs_list": [ 00:18:17.796 { 00:18:17.796 "name": "BaseBdev1", 00:18:17.796 "uuid": "d55de933-d800-5fde-803b-f529ee9c619c", 00:18:17.796 "is_configured": true, 00:18:17.796 "data_offset": 0, 00:18:17.796 "data_size": 65536 00:18:17.796 }, 00:18:17.796 { 00:18:17.796 "name": "BaseBdev2", 00:18:17.796 "uuid": "fe095671-bb2a-5e81-9e18-4f3639595db1", 00:18:17.796 "is_configured": true, 00:18:17.796 "data_offset": 0, 00:18:17.796 "data_size": 65536 00:18:17.796 }, 00:18:17.796 { 00:18:17.796 "name": "BaseBdev3", 00:18:17.796 "uuid": "3781e6a1-e03b-521f-8e4b-795520c25494", 00:18:17.796 "is_configured": true, 00:18:17.796 "data_offset": 0, 00:18:17.796 "data_size": 65536 00:18:17.796 }, 00:18:17.796 { 00:18:17.796 "name": "BaseBdev4", 00:18:17.796 "uuid": "c264e240-1fa1-5859-b0d5-785f7020eb5d", 00:18:17.796 "is_configured": true, 00:18:17.796 "data_offset": 0, 00:18:17.796 "data_size": 65536 00:18:17.796 } 00:18:17.796 ] 00:18:17.796 }' 00:18:17.796 04:09:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:17.796 04:09:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:18.054 04:09:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:18:18.054 04:09:11 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:18.054 04:09:11 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:18.054 04:09:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:18:18.054 [2024-12-06 04:09:11.381983] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: 
raid_bdev_dump_config_json 00:18:18.054 04:09:11 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:18.314 04:09:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=196608 00:18:18.314 04:09:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:18.314 04:09:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:18:18.314 04:09:11 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:18.314 04:09:11 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:18.314 04:09:11 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:18.314 04:09:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # data_offset=0 00:18:18.314 04:09:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:18:18.314 04:09:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:18:18.314 04:09:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:18:18.314 04:09:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:18:18.314 04:09:11 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:18:18.314 04:09:11 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:18:18.314 04:09:11 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:18:18.314 04:09:11 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:18:18.314 04:09:11 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:18:18.314 04:09:11 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:18:18.314 04:09:11 bdev_raid.raid5f_rebuild_test -- 
bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:18:18.314 04:09:11 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:18:18.314 04:09:11 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:18:18.573 [2024-12-06 04:09:11.693300] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:18:18.573 /dev/nbd0 00:18:18.573 04:09:11 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:18:18.573 04:09:11 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:18:18.573 04:09:11 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:18:18.573 04:09:11 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:18:18.573 04:09:11 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:18:18.573 04:09:11 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:18:18.573 04:09:11 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:18:18.573 04:09:11 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@877 -- # break 00:18:18.573 04:09:11 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:18:18.573 04:09:11 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:18:18.573 04:09:11 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:18:18.573 1+0 records in 00:18:18.573 1+0 records out 00:18:18.573 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000537178 s, 7.6 MB/s 00:18:18.573 04:09:11 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:18.573 04:09:11 
bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:18:18.573 04:09:11 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:18.573 04:09:11 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:18:18.573 04:09:11 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:18:18.574 04:09:11 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:18:18.574 04:09:11 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:18:18.574 04:09:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@629 -- # '[' raid5f = raid5f ']' 00:18:18.574 04:09:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@630 -- # write_unit_size=384 00:18:18.574 04:09:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@631 -- # echo 192 00:18:18.574 04:09:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=196608 count=512 oflag=direct 00:18:19.172 512+0 records in 00:18:19.172 512+0 records out 00:18:19.172 100663296 bytes (101 MB, 96 MiB) copied, 0.604799 s, 166 MB/s 00:18:19.172 04:09:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:18:19.172 04:09:12 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:18:19.172 04:09:12 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:18:19.172 04:09:12 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:18:19.172 04:09:12 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:18:19.172 04:09:12 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:18:19.172 04:09:12 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock 
nbd_stop_disk /dev/nbd0 00:18:19.432 [2024-12-06 04:09:12.615564] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:19.432 04:09:12 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:18:19.432 04:09:12 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:18:19.432 04:09:12 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:18:19.432 04:09:12 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:18:19.432 04:09:12 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:18:19.432 04:09:12 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:18:19.432 04:09:12 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:18:19.432 04:09:12 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:18:19.432 04:09:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:18:19.432 04:09:12 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:19.432 04:09:12 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:19.432 [2024-12-06 04:09:12.635408] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:18:19.432 04:09:12 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:19.432 04:09:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:18:19.432 04:09:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:19.433 04:09:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:19.433 04:09:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:19.433 04:09:12 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:19.433 04:09:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:18:19.433 04:09:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:19.433 04:09:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:19.433 04:09:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:19.433 04:09:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:19.433 04:09:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:19.433 04:09:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:19.433 04:09:12 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:19.433 04:09:12 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:19.433 04:09:12 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:19.433 04:09:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:19.433 "name": "raid_bdev1", 00:18:19.433 "uuid": "8ebfe49e-40cd-4d62-b1e8-3bd75704e1ed", 00:18:19.433 "strip_size_kb": 64, 00:18:19.433 "state": "online", 00:18:19.433 "raid_level": "raid5f", 00:18:19.433 "superblock": false, 00:18:19.433 "num_base_bdevs": 4, 00:18:19.433 "num_base_bdevs_discovered": 3, 00:18:19.433 "num_base_bdevs_operational": 3, 00:18:19.433 "base_bdevs_list": [ 00:18:19.433 { 00:18:19.433 "name": null, 00:18:19.433 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:19.433 "is_configured": false, 00:18:19.433 "data_offset": 0, 00:18:19.433 "data_size": 65536 00:18:19.433 }, 00:18:19.433 { 00:18:19.433 "name": "BaseBdev2", 00:18:19.433 "uuid": "fe095671-bb2a-5e81-9e18-4f3639595db1", 00:18:19.433 "is_configured": true, 00:18:19.433 
"data_offset": 0, 00:18:19.433 "data_size": 65536 00:18:19.433 }, 00:18:19.433 { 00:18:19.433 "name": "BaseBdev3", 00:18:19.433 "uuid": "3781e6a1-e03b-521f-8e4b-795520c25494", 00:18:19.433 "is_configured": true, 00:18:19.433 "data_offset": 0, 00:18:19.433 "data_size": 65536 00:18:19.433 }, 00:18:19.433 { 00:18:19.433 "name": "BaseBdev4", 00:18:19.433 "uuid": "c264e240-1fa1-5859-b0d5-785f7020eb5d", 00:18:19.433 "is_configured": true, 00:18:19.433 "data_offset": 0, 00:18:19.433 "data_size": 65536 00:18:19.433 } 00:18:19.433 ] 00:18:19.433 }' 00:18:19.433 04:09:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:19.433 04:09:12 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:20.003 04:09:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:18:20.003 04:09:13 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:20.003 04:09:13 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:20.003 [2024-12-06 04:09:13.130568] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:18:20.003 [2024-12-06 04:09:13.147528] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b750 00:18:20.003 04:09:13 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:20.003 04:09:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@647 -- # sleep 1 00:18:20.003 [2024-12-06 04:09:13.158109] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:18:20.943 04:09:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:20.943 04:09:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:20.943 04:09:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local 
process_type=rebuild 00:18:20.943 04:09:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:20.943 04:09:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:20.943 04:09:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:20.943 04:09:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:20.943 04:09:14 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:20.943 04:09:14 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:20.943 04:09:14 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:20.943 04:09:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:20.943 "name": "raid_bdev1", 00:18:20.943 "uuid": "8ebfe49e-40cd-4d62-b1e8-3bd75704e1ed", 00:18:20.943 "strip_size_kb": 64, 00:18:20.943 "state": "online", 00:18:20.943 "raid_level": "raid5f", 00:18:20.943 "superblock": false, 00:18:20.943 "num_base_bdevs": 4, 00:18:20.943 "num_base_bdevs_discovered": 4, 00:18:20.943 "num_base_bdevs_operational": 4, 00:18:20.943 "process": { 00:18:20.943 "type": "rebuild", 00:18:20.943 "target": "spare", 00:18:20.943 "progress": { 00:18:20.943 "blocks": 19200, 00:18:20.943 "percent": 9 00:18:20.943 } 00:18:20.943 }, 00:18:20.943 "base_bdevs_list": [ 00:18:20.943 { 00:18:20.943 "name": "spare", 00:18:20.943 "uuid": "b5c99b84-9ebd-5434-9774-dbdc73f95702", 00:18:20.943 "is_configured": true, 00:18:20.943 "data_offset": 0, 00:18:20.943 "data_size": 65536 00:18:20.943 }, 00:18:20.943 { 00:18:20.943 "name": "BaseBdev2", 00:18:20.943 "uuid": "fe095671-bb2a-5e81-9e18-4f3639595db1", 00:18:20.943 "is_configured": true, 00:18:20.943 "data_offset": 0, 00:18:20.943 "data_size": 65536 00:18:20.943 }, 00:18:20.943 { 00:18:20.943 "name": "BaseBdev3", 00:18:20.943 "uuid": 
"3781e6a1-e03b-521f-8e4b-795520c25494", 00:18:20.943 "is_configured": true, 00:18:20.943 "data_offset": 0, 00:18:20.943 "data_size": 65536 00:18:20.943 }, 00:18:20.943 { 00:18:20.943 "name": "BaseBdev4", 00:18:20.943 "uuid": "c264e240-1fa1-5859-b0d5-785f7020eb5d", 00:18:20.943 "is_configured": true, 00:18:20.943 "data_offset": 0, 00:18:20.943 "data_size": 65536 00:18:20.943 } 00:18:20.943 ] 00:18:20.943 }' 00:18:20.943 04:09:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:20.943 04:09:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:20.943 04:09:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:20.943 04:09:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:20.943 04:09:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:18:20.943 04:09:14 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:20.943 04:09:14 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:21.204 [2024-12-06 04:09:14.297492] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:21.204 [2024-12-06 04:09:14.367464] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:18:21.204 [2024-12-06 04:09:14.367695] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:21.204 [2024-12-06 04:09:14.367755] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:21.204 [2024-12-06 04:09:14.367802] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:18:21.204 04:09:14 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:21.204 04:09:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@656 -- # 
verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:18:21.204 04:09:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:21.204 04:09:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:21.204 04:09:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:21.204 04:09:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:21.204 04:09:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:18:21.204 04:09:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:21.204 04:09:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:21.204 04:09:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:21.204 04:09:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:21.204 04:09:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:21.204 04:09:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:21.204 04:09:14 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:21.204 04:09:14 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:21.204 04:09:14 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:21.204 04:09:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:21.204 "name": "raid_bdev1", 00:18:21.204 "uuid": "8ebfe49e-40cd-4d62-b1e8-3bd75704e1ed", 00:18:21.204 "strip_size_kb": 64, 00:18:21.204 "state": "online", 00:18:21.204 "raid_level": "raid5f", 00:18:21.204 "superblock": false, 00:18:21.204 "num_base_bdevs": 4, 00:18:21.204 "num_base_bdevs_discovered": 3, 00:18:21.204 
"num_base_bdevs_operational": 3, 00:18:21.204 "base_bdevs_list": [ 00:18:21.204 { 00:18:21.204 "name": null, 00:18:21.204 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:21.204 "is_configured": false, 00:18:21.204 "data_offset": 0, 00:18:21.204 "data_size": 65536 00:18:21.204 }, 00:18:21.204 { 00:18:21.204 "name": "BaseBdev2", 00:18:21.204 "uuid": "fe095671-bb2a-5e81-9e18-4f3639595db1", 00:18:21.204 "is_configured": true, 00:18:21.204 "data_offset": 0, 00:18:21.204 "data_size": 65536 00:18:21.204 }, 00:18:21.204 { 00:18:21.204 "name": "BaseBdev3", 00:18:21.204 "uuid": "3781e6a1-e03b-521f-8e4b-795520c25494", 00:18:21.204 "is_configured": true, 00:18:21.204 "data_offset": 0, 00:18:21.204 "data_size": 65536 00:18:21.204 }, 00:18:21.204 { 00:18:21.204 "name": "BaseBdev4", 00:18:21.204 "uuid": "c264e240-1fa1-5859-b0d5-785f7020eb5d", 00:18:21.204 "is_configured": true, 00:18:21.204 "data_offset": 0, 00:18:21.204 "data_size": 65536 00:18:21.204 } 00:18:21.204 ] 00:18:21.204 }' 00:18:21.204 04:09:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:21.204 04:09:14 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:21.774 04:09:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:18:21.774 04:09:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:21.774 04:09:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:18:21.774 04:09:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:18:21.774 04:09:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:21.774 04:09:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:21.774 04:09:14 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:21.774 04:09:14 
bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:21.774 04:09:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:21.774 04:09:14 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:21.774 04:09:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:21.774 "name": "raid_bdev1", 00:18:21.774 "uuid": "8ebfe49e-40cd-4d62-b1e8-3bd75704e1ed", 00:18:21.774 "strip_size_kb": 64, 00:18:21.774 "state": "online", 00:18:21.774 "raid_level": "raid5f", 00:18:21.774 "superblock": false, 00:18:21.774 "num_base_bdevs": 4, 00:18:21.774 "num_base_bdevs_discovered": 3, 00:18:21.774 "num_base_bdevs_operational": 3, 00:18:21.774 "base_bdevs_list": [ 00:18:21.774 { 00:18:21.774 "name": null, 00:18:21.774 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:21.774 "is_configured": false, 00:18:21.774 "data_offset": 0, 00:18:21.774 "data_size": 65536 00:18:21.774 }, 00:18:21.774 { 00:18:21.774 "name": "BaseBdev2", 00:18:21.774 "uuid": "fe095671-bb2a-5e81-9e18-4f3639595db1", 00:18:21.774 "is_configured": true, 00:18:21.774 "data_offset": 0, 00:18:21.774 "data_size": 65536 00:18:21.774 }, 00:18:21.774 { 00:18:21.774 "name": "BaseBdev3", 00:18:21.774 "uuid": "3781e6a1-e03b-521f-8e4b-795520c25494", 00:18:21.774 "is_configured": true, 00:18:21.774 "data_offset": 0, 00:18:21.774 "data_size": 65536 00:18:21.774 }, 00:18:21.774 { 00:18:21.774 "name": "BaseBdev4", 00:18:21.774 "uuid": "c264e240-1fa1-5859-b0d5-785f7020eb5d", 00:18:21.774 "is_configured": true, 00:18:21.774 "data_offset": 0, 00:18:21.774 "data_size": 65536 00:18:21.774 } 00:18:21.774 ] 00:18:21.774 }' 00:18:21.774 04:09:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:21.774 04:09:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:18:21.774 04:09:14 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:21.774 04:09:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:18:21.774 04:09:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:18:21.774 04:09:15 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:21.774 04:09:15 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:21.774 [2024-12-06 04:09:15.026906] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:18:21.774 [2024-12-06 04:09:15.044503] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b820 00:18:21.774 04:09:15 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:21.774 04:09:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@663 -- # sleep 1 00:18:21.774 [2024-12-06 04:09:15.055784] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:18:22.711 04:09:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:22.711 04:09:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:22.711 04:09:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:22.711 04:09:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:22.711 04:09:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:22.711 04:09:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:22.711 04:09:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:22.711 04:09:16 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:22.711 
04:09:16 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:22.970 04:09:16 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:22.970 04:09:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:22.970 "name": "raid_bdev1", 00:18:22.970 "uuid": "8ebfe49e-40cd-4d62-b1e8-3bd75704e1ed", 00:18:22.970 "strip_size_kb": 64, 00:18:22.970 "state": "online", 00:18:22.970 "raid_level": "raid5f", 00:18:22.970 "superblock": false, 00:18:22.970 "num_base_bdevs": 4, 00:18:22.970 "num_base_bdevs_discovered": 4, 00:18:22.970 "num_base_bdevs_operational": 4, 00:18:22.970 "process": { 00:18:22.970 "type": "rebuild", 00:18:22.970 "target": "spare", 00:18:22.970 "progress": { 00:18:22.970 "blocks": 17280, 00:18:22.970 "percent": 8 00:18:22.970 } 00:18:22.970 }, 00:18:22.970 "base_bdevs_list": [ 00:18:22.970 { 00:18:22.970 "name": "spare", 00:18:22.970 "uuid": "b5c99b84-9ebd-5434-9774-dbdc73f95702", 00:18:22.970 "is_configured": true, 00:18:22.970 "data_offset": 0, 00:18:22.970 "data_size": 65536 00:18:22.970 }, 00:18:22.970 { 00:18:22.970 "name": "BaseBdev2", 00:18:22.970 "uuid": "fe095671-bb2a-5e81-9e18-4f3639595db1", 00:18:22.970 "is_configured": true, 00:18:22.970 "data_offset": 0, 00:18:22.970 "data_size": 65536 00:18:22.970 }, 00:18:22.970 { 00:18:22.970 "name": "BaseBdev3", 00:18:22.970 "uuid": "3781e6a1-e03b-521f-8e4b-795520c25494", 00:18:22.970 "is_configured": true, 00:18:22.970 "data_offset": 0, 00:18:22.970 "data_size": 65536 00:18:22.970 }, 00:18:22.970 { 00:18:22.970 "name": "BaseBdev4", 00:18:22.970 "uuid": "c264e240-1fa1-5859-b0d5-785f7020eb5d", 00:18:22.970 "is_configured": true, 00:18:22.970 "data_offset": 0, 00:18:22.970 "data_size": 65536 00:18:22.970 } 00:18:22.970 ] 00:18:22.970 }' 00:18:22.970 04:09:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:22.970 04:09:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 
-- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:22.970 04:09:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:22.970 04:09:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:22.970 04:09:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@666 -- # '[' false = true ']' 00:18:22.970 04:09:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4 00:18:22.970 04:09:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@693 -- # '[' raid5f = raid1 ']' 00:18:22.970 04:09:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@706 -- # local timeout=634 00:18:22.970 04:09:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:18:22.970 04:09:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:22.970 04:09:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:22.970 04:09:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:22.970 04:09:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:22.970 04:09:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:22.970 04:09:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:22.970 04:09:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:22.970 04:09:16 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:22.970 04:09:16 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:22.970 04:09:16 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:22.970 04:09:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 
00:18:22.970 "name": "raid_bdev1", 00:18:22.970 "uuid": "8ebfe49e-40cd-4d62-b1e8-3bd75704e1ed", 00:18:22.970 "strip_size_kb": 64, 00:18:22.970 "state": "online", 00:18:22.970 "raid_level": "raid5f", 00:18:22.970 "superblock": false, 00:18:22.971 "num_base_bdevs": 4, 00:18:22.971 "num_base_bdevs_discovered": 4, 00:18:22.971 "num_base_bdevs_operational": 4, 00:18:22.971 "process": { 00:18:22.971 "type": "rebuild", 00:18:22.971 "target": "spare", 00:18:22.971 "progress": { 00:18:22.971 "blocks": 21120, 00:18:22.971 "percent": 10 00:18:22.971 } 00:18:22.971 }, 00:18:22.971 "base_bdevs_list": [ 00:18:22.971 { 00:18:22.971 "name": "spare", 00:18:22.971 "uuid": "b5c99b84-9ebd-5434-9774-dbdc73f95702", 00:18:22.971 "is_configured": true, 00:18:22.971 "data_offset": 0, 00:18:22.971 "data_size": 65536 00:18:22.971 }, 00:18:22.971 { 00:18:22.971 "name": "BaseBdev2", 00:18:22.971 "uuid": "fe095671-bb2a-5e81-9e18-4f3639595db1", 00:18:22.971 "is_configured": true, 00:18:22.971 "data_offset": 0, 00:18:22.971 "data_size": 65536 00:18:22.971 }, 00:18:22.971 { 00:18:22.971 "name": "BaseBdev3", 00:18:22.971 "uuid": "3781e6a1-e03b-521f-8e4b-795520c25494", 00:18:22.971 "is_configured": true, 00:18:22.971 "data_offset": 0, 00:18:22.971 "data_size": 65536 00:18:22.971 }, 00:18:22.971 { 00:18:22.971 "name": "BaseBdev4", 00:18:22.971 "uuid": "c264e240-1fa1-5859-b0d5-785f7020eb5d", 00:18:22.971 "is_configured": true, 00:18:22.971 "data_offset": 0, 00:18:22.971 "data_size": 65536 00:18:22.971 } 00:18:22.971 ] 00:18:22.971 }' 00:18:22.971 04:09:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:22.971 04:09:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:22.971 04:09:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:22.971 04:09:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:22.971 04:09:16 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:18:24.351 04:09:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:18:24.351 04:09:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:24.351 04:09:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:24.351 04:09:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:24.352 04:09:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:24.352 04:09:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:24.352 04:09:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:24.352 04:09:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:24.352 04:09:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:24.352 04:09:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:24.352 04:09:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:24.352 04:09:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:24.352 "name": "raid_bdev1", 00:18:24.352 "uuid": "8ebfe49e-40cd-4d62-b1e8-3bd75704e1ed", 00:18:24.352 "strip_size_kb": 64, 00:18:24.352 "state": "online", 00:18:24.352 "raid_level": "raid5f", 00:18:24.352 "superblock": false, 00:18:24.352 "num_base_bdevs": 4, 00:18:24.352 "num_base_bdevs_discovered": 4, 00:18:24.352 "num_base_bdevs_operational": 4, 00:18:24.352 "process": { 00:18:24.352 "type": "rebuild", 00:18:24.352 "target": "spare", 00:18:24.352 "progress": { 00:18:24.352 "blocks": 42240, 00:18:24.352 "percent": 21 00:18:24.352 } 00:18:24.352 }, 00:18:24.352 "base_bdevs_list": [ 00:18:24.352 { 
00:18:24.352 "name": "spare", 00:18:24.352 "uuid": "b5c99b84-9ebd-5434-9774-dbdc73f95702", 00:18:24.352 "is_configured": true, 00:18:24.352 "data_offset": 0, 00:18:24.352 "data_size": 65536 00:18:24.352 }, 00:18:24.352 { 00:18:24.352 "name": "BaseBdev2", 00:18:24.352 "uuid": "fe095671-bb2a-5e81-9e18-4f3639595db1", 00:18:24.352 "is_configured": true, 00:18:24.352 "data_offset": 0, 00:18:24.352 "data_size": 65536 00:18:24.352 }, 00:18:24.352 { 00:18:24.352 "name": "BaseBdev3", 00:18:24.352 "uuid": "3781e6a1-e03b-521f-8e4b-795520c25494", 00:18:24.352 "is_configured": true, 00:18:24.352 "data_offset": 0, 00:18:24.352 "data_size": 65536 00:18:24.352 }, 00:18:24.352 { 00:18:24.352 "name": "BaseBdev4", 00:18:24.352 "uuid": "c264e240-1fa1-5859-b0d5-785f7020eb5d", 00:18:24.352 "is_configured": true, 00:18:24.352 "data_offset": 0, 00:18:24.352 "data_size": 65536 00:18:24.352 } 00:18:24.352 ] 00:18:24.352 }' 00:18:24.352 04:09:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:24.352 04:09:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:24.352 04:09:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:24.352 04:09:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:24.352 04:09:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:18:25.292 04:09:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:18:25.292 04:09:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:25.292 04:09:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:25.292 04:09:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:25.292 04:09:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # 
local target=spare 00:18:25.292 04:09:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:25.292 04:09:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:25.292 04:09:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:25.292 04:09:18 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:25.292 04:09:18 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:25.292 04:09:18 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:25.292 04:09:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:25.292 "name": "raid_bdev1", 00:18:25.292 "uuid": "8ebfe49e-40cd-4d62-b1e8-3bd75704e1ed", 00:18:25.292 "strip_size_kb": 64, 00:18:25.292 "state": "online", 00:18:25.292 "raid_level": "raid5f", 00:18:25.292 "superblock": false, 00:18:25.292 "num_base_bdevs": 4, 00:18:25.292 "num_base_bdevs_discovered": 4, 00:18:25.292 "num_base_bdevs_operational": 4, 00:18:25.292 "process": { 00:18:25.292 "type": "rebuild", 00:18:25.292 "target": "spare", 00:18:25.292 "progress": { 00:18:25.292 "blocks": 65280, 00:18:25.292 "percent": 33 00:18:25.292 } 00:18:25.292 }, 00:18:25.292 "base_bdevs_list": [ 00:18:25.292 { 00:18:25.292 "name": "spare", 00:18:25.292 "uuid": "b5c99b84-9ebd-5434-9774-dbdc73f95702", 00:18:25.292 "is_configured": true, 00:18:25.292 "data_offset": 0, 00:18:25.292 "data_size": 65536 00:18:25.292 }, 00:18:25.292 { 00:18:25.292 "name": "BaseBdev2", 00:18:25.292 "uuid": "fe095671-bb2a-5e81-9e18-4f3639595db1", 00:18:25.292 "is_configured": true, 00:18:25.292 "data_offset": 0, 00:18:25.292 "data_size": 65536 00:18:25.292 }, 00:18:25.292 { 00:18:25.292 "name": "BaseBdev3", 00:18:25.292 "uuid": "3781e6a1-e03b-521f-8e4b-795520c25494", 00:18:25.292 "is_configured": true, 00:18:25.293 "data_offset": 0, 00:18:25.293 
"data_size": 65536 00:18:25.293 }, 00:18:25.293 { 00:18:25.293 "name": "BaseBdev4", 00:18:25.293 "uuid": "c264e240-1fa1-5859-b0d5-785f7020eb5d", 00:18:25.293 "is_configured": true, 00:18:25.293 "data_offset": 0, 00:18:25.293 "data_size": 65536 00:18:25.293 } 00:18:25.293 ] 00:18:25.293 }' 00:18:25.293 04:09:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:25.293 04:09:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:25.293 04:09:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:25.293 04:09:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:25.293 04:09:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:18:26.674 04:09:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:18:26.674 04:09:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:26.674 04:09:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:26.674 04:09:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:26.674 04:09:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:26.674 04:09:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:26.674 04:09:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:26.674 04:09:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:26.674 04:09:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:26.674 04:09:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:26.674 04:09:19 bdev_raid.raid5f_rebuild_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:26.674 04:09:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:26.674 "name": "raid_bdev1", 00:18:26.674 "uuid": "8ebfe49e-40cd-4d62-b1e8-3bd75704e1ed", 00:18:26.674 "strip_size_kb": 64, 00:18:26.674 "state": "online", 00:18:26.674 "raid_level": "raid5f", 00:18:26.674 "superblock": false, 00:18:26.674 "num_base_bdevs": 4, 00:18:26.674 "num_base_bdevs_discovered": 4, 00:18:26.674 "num_base_bdevs_operational": 4, 00:18:26.674 "process": { 00:18:26.674 "type": "rebuild", 00:18:26.674 "target": "spare", 00:18:26.674 "progress": { 00:18:26.674 "blocks": 86400, 00:18:26.674 "percent": 43 00:18:26.674 } 00:18:26.674 }, 00:18:26.674 "base_bdevs_list": [ 00:18:26.674 { 00:18:26.674 "name": "spare", 00:18:26.674 "uuid": "b5c99b84-9ebd-5434-9774-dbdc73f95702", 00:18:26.674 "is_configured": true, 00:18:26.674 "data_offset": 0, 00:18:26.674 "data_size": 65536 00:18:26.674 }, 00:18:26.674 { 00:18:26.674 "name": "BaseBdev2", 00:18:26.674 "uuid": "fe095671-bb2a-5e81-9e18-4f3639595db1", 00:18:26.674 "is_configured": true, 00:18:26.674 "data_offset": 0, 00:18:26.674 "data_size": 65536 00:18:26.674 }, 00:18:26.674 { 00:18:26.674 "name": "BaseBdev3", 00:18:26.674 "uuid": "3781e6a1-e03b-521f-8e4b-795520c25494", 00:18:26.674 "is_configured": true, 00:18:26.674 "data_offset": 0, 00:18:26.674 "data_size": 65536 00:18:26.674 }, 00:18:26.674 { 00:18:26.674 "name": "BaseBdev4", 00:18:26.674 "uuid": "c264e240-1fa1-5859-b0d5-785f7020eb5d", 00:18:26.674 "is_configured": true, 00:18:26.674 "data_offset": 0, 00:18:26.674 "data_size": 65536 00:18:26.674 } 00:18:26.674 ] 00:18:26.674 }' 00:18:26.674 04:09:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:26.674 04:09:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:26.674 04:09:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r 
'.process.target // "none"' 00:18:26.674 04:09:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:26.674 04:09:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:18:27.614 04:09:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:18:27.614 04:09:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:27.614 04:09:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:27.614 04:09:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:27.614 04:09:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:27.614 04:09:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:27.614 04:09:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:27.614 04:09:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:27.614 04:09:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:27.614 04:09:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:27.614 04:09:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:27.614 04:09:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:27.614 "name": "raid_bdev1", 00:18:27.614 "uuid": "8ebfe49e-40cd-4d62-b1e8-3bd75704e1ed", 00:18:27.614 "strip_size_kb": 64, 00:18:27.614 "state": "online", 00:18:27.614 "raid_level": "raid5f", 00:18:27.614 "superblock": false, 00:18:27.614 "num_base_bdevs": 4, 00:18:27.614 "num_base_bdevs_discovered": 4, 00:18:27.614 "num_base_bdevs_operational": 4, 00:18:27.614 "process": { 00:18:27.614 "type": "rebuild", 00:18:27.614 "target": "spare", 00:18:27.614 
"progress": { 00:18:27.614 "blocks": 107520, 00:18:27.614 "percent": 54 00:18:27.614 } 00:18:27.614 }, 00:18:27.614 "base_bdevs_list": [ 00:18:27.614 { 00:18:27.614 "name": "spare", 00:18:27.614 "uuid": "b5c99b84-9ebd-5434-9774-dbdc73f95702", 00:18:27.614 "is_configured": true, 00:18:27.614 "data_offset": 0, 00:18:27.614 "data_size": 65536 00:18:27.614 }, 00:18:27.614 { 00:18:27.614 "name": "BaseBdev2", 00:18:27.614 "uuid": "fe095671-bb2a-5e81-9e18-4f3639595db1", 00:18:27.614 "is_configured": true, 00:18:27.614 "data_offset": 0, 00:18:27.614 "data_size": 65536 00:18:27.614 }, 00:18:27.614 { 00:18:27.614 "name": "BaseBdev3", 00:18:27.614 "uuid": "3781e6a1-e03b-521f-8e4b-795520c25494", 00:18:27.614 "is_configured": true, 00:18:27.614 "data_offset": 0, 00:18:27.614 "data_size": 65536 00:18:27.614 }, 00:18:27.614 { 00:18:27.614 "name": "BaseBdev4", 00:18:27.614 "uuid": "c264e240-1fa1-5859-b0d5-785f7020eb5d", 00:18:27.614 "is_configured": true, 00:18:27.614 "data_offset": 0, 00:18:27.614 "data_size": 65536 00:18:27.614 } 00:18:27.614 ] 00:18:27.614 }' 00:18:27.614 04:09:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:27.614 04:09:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:27.614 04:09:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:27.614 04:09:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:27.614 04:09:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:18:28.995 04:09:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:18:28.995 04:09:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:28.995 04:09:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:28.995 04:09:21 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:28.995 04:09:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:28.995 04:09:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:28.995 04:09:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:28.995 04:09:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:28.995 04:09:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:28.995 04:09:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:28.995 04:09:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:28.995 04:09:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:28.995 "name": "raid_bdev1", 00:18:28.995 "uuid": "8ebfe49e-40cd-4d62-b1e8-3bd75704e1ed", 00:18:28.995 "strip_size_kb": 64, 00:18:28.995 "state": "online", 00:18:28.995 "raid_level": "raid5f", 00:18:28.995 "superblock": false, 00:18:28.995 "num_base_bdevs": 4, 00:18:28.995 "num_base_bdevs_discovered": 4, 00:18:28.995 "num_base_bdevs_operational": 4, 00:18:28.995 "process": { 00:18:28.995 "type": "rebuild", 00:18:28.995 "target": "spare", 00:18:28.995 "progress": { 00:18:28.995 "blocks": 130560, 00:18:28.996 "percent": 66 00:18:28.996 } 00:18:28.996 }, 00:18:28.996 "base_bdevs_list": [ 00:18:28.996 { 00:18:28.996 "name": "spare", 00:18:28.996 "uuid": "b5c99b84-9ebd-5434-9774-dbdc73f95702", 00:18:28.996 "is_configured": true, 00:18:28.996 "data_offset": 0, 00:18:28.996 "data_size": 65536 00:18:28.996 }, 00:18:28.996 { 00:18:28.996 "name": "BaseBdev2", 00:18:28.996 "uuid": "fe095671-bb2a-5e81-9e18-4f3639595db1", 00:18:28.996 "is_configured": true, 00:18:28.996 "data_offset": 0, 00:18:28.996 "data_size": 65536 00:18:28.996 }, 00:18:28.996 { 
00:18:28.996 "name": "BaseBdev3", 00:18:28.996 "uuid": "3781e6a1-e03b-521f-8e4b-795520c25494", 00:18:28.996 "is_configured": true, 00:18:28.996 "data_offset": 0, 00:18:28.996 "data_size": 65536 00:18:28.996 }, 00:18:28.996 { 00:18:28.996 "name": "BaseBdev4", 00:18:28.996 "uuid": "c264e240-1fa1-5859-b0d5-785f7020eb5d", 00:18:28.996 "is_configured": true, 00:18:28.996 "data_offset": 0, 00:18:28.996 "data_size": 65536 00:18:28.996 } 00:18:28.996 ] 00:18:28.996 }' 00:18:28.996 04:09:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:28.996 04:09:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:28.996 04:09:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:28.996 04:09:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:28.996 04:09:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:18:29.957 04:09:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:18:29.957 04:09:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:29.957 04:09:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:29.957 04:09:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:29.957 04:09:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:29.957 04:09:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:29.957 04:09:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:29.957 04:09:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:29.957 04:09:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- 
# xtrace_disable 00:18:29.957 04:09:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:29.957 04:09:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:29.957 04:09:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:29.957 "name": "raid_bdev1", 00:18:29.957 "uuid": "8ebfe49e-40cd-4d62-b1e8-3bd75704e1ed", 00:18:29.958 "strip_size_kb": 64, 00:18:29.958 "state": "online", 00:18:29.958 "raid_level": "raid5f", 00:18:29.958 "superblock": false, 00:18:29.958 "num_base_bdevs": 4, 00:18:29.958 "num_base_bdevs_discovered": 4, 00:18:29.958 "num_base_bdevs_operational": 4, 00:18:29.958 "process": { 00:18:29.958 "type": "rebuild", 00:18:29.958 "target": "spare", 00:18:29.958 "progress": { 00:18:29.958 "blocks": 151680, 00:18:29.958 "percent": 77 00:18:29.958 } 00:18:29.958 }, 00:18:29.958 "base_bdevs_list": [ 00:18:29.958 { 00:18:29.958 "name": "spare", 00:18:29.958 "uuid": "b5c99b84-9ebd-5434-9774-dbdc73f95702", 00:18:29.958 "is_configured": true, 00:18:29.958 "data_offset": 0, 00:18:29.958 "data_size": 65536 00:18:29.958 }, 00:18:29.958 { 00:18:29.958 "name": "BaseBdev2", 00:18:29.958 "uuid": "fe095671-bb2a-5e81-9e18-4f3639595db1", 00:18:29.958 "is_configured": true, 00:18:29.958 "data_offset": 0, 00:18:29.958 "data_size": 65536 00:18:29.958 }, 00:18:29.958 { 00:18:29.958 "name": "BaseBdev3", 00:18:29.958 "uuid": "3781e6a1-e03b-521f-8e4b-795520c25494", 00:18:29.958 "is_configured": true, 00:18:29.958 "data_offset": 0, 00:18:29.958 "data_size": 65536 00:18:29.958 }, 00:18:29.958 { 00:18:29.958 "name": "BaseBdev4", 00:18:29.958 "uuid": "c264e240-1fa1-5859-b0d5-785f7020eb5d", 00:18:29.958 "is_configured": true, 00:18:29.958 "data_offset": 0, 00:18:29.958 "data_size": 65536 00:18:29.958 } 00:18:29.958 ] 00:18:29.958 }' 00:18:29.958 04:09:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:29.958 04:09:23 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:29.958 04:09:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:29.958 04:09:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:29.958 04:09:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:18:30.895 04:09:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:18:30.895 04:09:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:30.895 04:09:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:30.895 04:09:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:30.895 04:09:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:30.895 04:09:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:30.895 04:09:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:30.895 04:09:24 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:30.895 04:09:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:30.895 04:09:24 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:30.895 04:09:24 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:30.895 04:09:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:30.895 "name": "raid_bdev1", 00:18:30.895 "uuid": "8ebfe49e-40cd-4d62-b1e8-3bd75704e1ed", 00:18:30.895 "strip_size_kb": 64, 00:18:30.895 "state": "online", 00:18:30.895 "raid_level": "raid5f", 00:18:30.895 "superblock": false, 00:18:30.895 "num_base_bdevs": 4, 00:18:30.895 
"num_base_bdevs_discovered": 4, 00:18:30.895 "num_base_bdevs_operational": 4, 00:18:30.895 "process": { 00:18:30.895 "type": "rebuild", 00:18:30.895 "target": "spare", 00:18:30.895 "progress": { 00:18:30.895 "blocks": 172800, 00:18:30.895 "percent": 87 00:18:30.896 } 00:18:30.896 }, 00:18:30.896 "base_bdevs_list": [ 00:18:30.896 { 00:18:30.896 "name": "spare", 00:18:30.896 "uuid": "b5c99b84-9ebd-5434-9774-dbdc73f95702", 00:18:30.896 "is_configured": true, 00:18:30.896 "data_offset": 0, 00:18:30.896 "data_size": 65536 00:18:30.896 }, 00:18:30.896 { 00:18:30.896 "name": "BaseBdev2", 00:18:30.896 "uuid": "fe095671-bb2a-5e81-9e18-4f3639595db1", 00:18:30.896 "is_configured": true, 00:18:30.896 "data_offset": 0, 00:18:30.896 "data_size": 65536 00:18:30.896 }, 00:18:30.896 { 00:18:30.896 "name": "BaseBdev3", 00:18:30.896 "uuid": "3781e6a1-e03b-521f-8e4b-795520c25494", 00:18:30.896 "is_configured": true, 00:18:30.896 "data_offset": 0, 00:18:30.896 "data_size": 65536 00:18:30.896 }, 00:18:30.896 { 00:18:30.896 "name": "BaseBdev4", 00:18:30.896 "uuid": "c264e240-1fa1-5859-b0d5-785f7020eb5d", 00:18:30.896 "is_configured": true, 00:18:30.896 "data_offset": 0, 00:18:30.896 "data_size": 65536 00:18:30.896 } 00:18:30.896 ] 00:18:30.896 }' 00:18:31.155 04:09:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:31.155 04:09:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:31.155 04:09:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:31.155 04:09:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:31.155 04:09:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:18:32.092 04:09:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:18:32.092 04:09:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process 
raid_bdev1 rebuild spare 00:18:32.092 04:09:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:32.092 04:09:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:32.092 04:09:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:32.092 04:09:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:32.092 04:09:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:32.092 04:09:25 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:32.092 04:09:25 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:32.092 04:09:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:32.092 04:09:25 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:32.092 04:09:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:32.092 "name": "raid_bdev1", 00:18:32.092 "uuid": "8ebfe49e-40cd-4d62-b1e8-3bd75704e1ed", 00:18:32.092 "strip_size_kb": 64, 00:18:32.092 "state": "online", 00:18:32.092 "raid_level": "raid5f", 00:18:32.092 "superblock": false, 00:18:32.092 "num_base_bdevs": 4, 00:18:32.092 "num_base_bdevs_discovered": 4, 00:18:32.092 "num_base_bdevs_operational": 4, 00:18:32.092 "process": { 00:18:32.092 "type": "rebuild", 00:18:32.092 "target": "spare", 00:18:32.092 "progress": { 00:18:32.092 "blocks": 195840, 00:18:32.092 "percent": 99 00:18:32.092 } 00:18:32.092 }, 00:18:32.092 "base_bdevs_list": [ 00:18:32.092 { 00:18:32.092 "name": "spare", 00:18:32.092 "uuid": "b5c99b84-9ebd-5434-9774-dbdc73f95702", 00:18:32.092 "is_configured": true, 00:18:32.092 "data_offset": 0, 00:18:32.092 "data_size": 65536 00:18:32.092 }, 00:18:32.092 { 00:18:32.092 "name": "BaseBdev2", 00:18:32.092 "uuid": 
"fe095671-bb2a-5e81-9e18-4f3639595db1", 00:18:32.092 "is_configured": true, 00:18:32.092 "data_offset": 0, 00:18:32.092 "data_size": 65536 00:18:32.092 }, 00:18:32.092 { 00:18:32.092 "name": "BaseBdev3", 00:18:32.092 "uuid": "3781e6a1-e03b-521f-8e4b-795520c25494", 00:18:32.092 "is_configured": true, 00:18:32.092 "data_offset": 0, 00:18:32.092 "data_size": 65536 00:18:32.092 }, 00:18:32.092 { 00:18:32.092 "name": "BaseBdev4", 00:18:32.092 "uuid": "c264e240-1fa1-5859-b0d5-785f7020eb5d", 00:18:32.092 "is_configured": true, 00:18:32.092 "data_offset": 0, 00:18:32.092 "data_size": 65536 00:18:32.092 } 00:18:32.092 ] 00:18:32.092 }' 00:18:32.092 04:09:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:32.092 [2024-12-06 04:09:25.431609] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:18:32.092 [2024-12-06 04:09:25.431794] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:18:32.092 [2024-12-06 04:09:25.431889] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:32.350 04:09:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:32.350 04:09:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:32.350 04:09:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:32.350 04:09:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:18:33.285 04:09:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:18:33.285 04:09:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:33.285 04:09:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:33.285 04:09:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local 
process_type=rebuild 00:18:33.285 04:09:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:33.285 04:09:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:33.285 04:09:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:33.285 04:09:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:33.285 04:09:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:33.285 04:09:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:33.285 04:09:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:33.285 04:09:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:33.285 "name": "raid_bdev1", 00:18:33.285 "uuid": "8ebfe49e-40cd-4d62-b1e8-3bd75704e1ed", 00:18:33.285 "strip_size_kb": 64, 00:18:33.285 "state": "online", 00:18:33.285 "raid_level": "raid5f", 00:18:33.285 "superblock": false, 00:18:33.285 "num_base_bdevs": 4, 00:18:33.285 "num_base_bdevs_discovered": 4, 00:18:33.285 "num_base_bdevs_operational": 4, 00:18:33.285 "base_bdevs_list": [ 00:18:33.285 { 00:18:33.285 "name": "spare", 00:18:33.285 "uuid": "b5c99b84-9ebd-5434-9774-dbdc73f95702", 00:18:33.285 "is_configured": true, 00:18:33.285 "data_offset": 0, 00:18:33.285 "data_size": 65536 00:18:33.285 }, 00:18:33.285 { 00:18:33.285 "name": "BaseBdev2", 00:18:33.285 "uuid": "fe095671-bb2a-5e81-9e18-4f3639595db1", 00:18:33.285 "is_configured": true, 00:18:33.285 "data_offset": 0, 00:18:33.286 "data_size": 65536 00:18:33.286 }, 00:18:33.286 { 00:18:33.286 "name": "BaseBdev3", 00:18:33.286 "uuid": "3781e6a1-e03b-521f-8e4b-795520c25494", 00:18:33.286 "is_configured": true, 00:18:33.286 "data_offset": 0, 00:18:33.286 "data_size": 65536 00:18:33.286 }, 00:18:33.286 { 00:18:33.286 "name": "BaseBdev4", 00:18:33.286 
"uuid": "c264e240-1fa1-5859-b0d5-785f7020eb5d", 00:18:33.286 "is_configured": true, 00:18:33.286 "data_offset": 0, 00:18:33.286 "data_size": 65536 00:18:33.286 } 00:18:33.286 ] 00:18:33.286 }' 00:18:33.286 04:09:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:33.286 04:09:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:18:33.286 04:09:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:33.286 04:09:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:18:33.286 04:09:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@709 -- # break 00:18:33.286 04:09:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:18:33.286 04:09:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:33.286 04:09:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:18:33.286 04:09:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:18:33.286 04:09:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:33.545 04:09:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:33.545 04:09:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:33.545 04:09:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:33.545 04:09:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:33.545 04:09:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:33.545 04:09:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:33.545 "name": "raid_bdev1", 00:18:33.545 "uuid": 
"8ebfe49e-40cd-4d62-b1e8-3bd75704e1ed", 00:18:33.545 "strip_size_kb": 64, 00:18:33.545 "state": "online", 00:18:33.545 "raid_level": "raid5f", 00:18:33.545 "superblock": false, 00:18:33.545 "num_base_bdevs": 4, 00:18:33.545 "num_base_bdevs_discovered": 4, 00:18:33.545 "num_base_bdevs_operational": 4, 00:18:33.545 "base_bdevs_list": [ 00:18:33.545 { 00:18:33.545 "name": "spare", 00:18:33.545 "uuid": "b5c99b84-9ebd-5434-9774-dbdc73f95702", 00:18:33.545 "is_configured": true, 00:18:33.545 "data_offset": 0, 00:18:33.545 "data_size": 65536 00:18:33.545 }, 00:18:33.545 { 00:18:33.546 "name": "BaseBdev2", 00:18:33.546 "uuid": "fe095671-bb2a-5e81-9e18-4f3639595db1", 00:18:33.546 "is_configured": true, 00:18:33.546 "data_offset": 0, 00:18:33.546 "data_size": 65536 00:18:33.546 }, 00:18:33.546 { 00:18:33.546 "name": "BaseBdev3", 00:18:33.546 "uuid": "3781e6a1-e03b-521f-8e4b-795520c25494", 00:18:33.546 "is_configured": true, 00:18:33.546 "data_offset": 0, 00:18:33.546 "data_size": 65536 00:18:33.546 }, 00:18:33.546 { 00:18:33.546 "name": "BaseBdev4", 00:18:33.546 "uuid": "c264e240-1fa1-5859-b0d5-785f7020eb5d", 00:18:33.546 "is_configured": true, 00:18:33.546 "data_offset": 0, 00:18:33.546 "data_size": 65536 00:18:33.546 } 00:18:33.546 ] 00:18:33.546 }' 00:18:33.546 04:09:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:33.546 04:09:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:18:33.546 04:09:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:33.546 04:09:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:18:33.546 04:09:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:18:33.546 04:09:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:33.546 04:09:26 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:33.546 04:09:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:33.546 04:09:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:33.546 04:09:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:18:33.546 04:09:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:33.546 04:09:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:33.546 04:09:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:33.546 04:09:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:33.546 04:09:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:33.546 04:09:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:33.546 04:09:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:33.546 04:09:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:33.546 04:09:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:33.546 04:09:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:33.546 "name": "raid_bdev1", 00:18:33.546 "uuid": "8ebfe49e-40cd-4d62-b1e8-3bd75704e1ed", 00:18:33.546 "strip_size_kb": 64, 00:18:33.546 "state": "online", 00:18:33.546 "raid_level": "raid5f", 00:18:33.546 "superblock": false, 00:18:33.546 "num_base_bdevs": 4, 00:18:33.546 "num_base_bdevs_discovered": 4, 00:18:33.546 "num_base_bdevs_operational": 4, 00:18:33.546 "base_bdevs_list": [ 00:18:33.546 { 00:18:33.546 "name": "spare", 00:18:33.546 "uuid": "b5c99b84-9ebd-5434-9774-dbdc73f95702", 00:18:33.546 "is_configured": 
true, 00:18:33.546 "data_offset": 0, 00:18:33.546 "data_size": 65536 00:18:33.546 }, 00:18:33.546 { 00:18:33.546 "name": "BaseBdev2", 00:18:33.546 "uuid": "fe095671-bb2a-5e81-9e18-4f3639595db1", 00:18:33.546 "is_configured": true, 00:18:33.546 "data_offset": 0, 00:18:33.546 "data_size": 65536 00:18:33.546 }, 00:18:33.546 { 00:18:33.546 "name": "BaseBdev3", 00:18:33.546 "uuid": "3781e6a1-e03b-521f-8e4b-795520c25494", 00:18:33.546 "is_configured": true, 00:18:33.546 "data_offset": 0, 00:18:33.546 "data_size": 65536 00:18:33.546 }, 00:18:33.546 { 00:18:33.546 "name": "BaseBdev4", 00:18:33.546 "uuid": "c264e240-1fa1-5859-b0d5-785f7020eb5d", 00:18:33.546 "is_configured": true, 00:18:33.546 "data_offset": 0, 00:18:33.546 "data_size": 65536 00:18:33.546 } 00:18:33.546 ] 00:18:33.546 }' 00:18:33.546 04:09:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:33.546 04:09:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:34.117 04:09:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:18:34.117 04:09:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:34.117 04:09:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:34.117 [2024-12-06 04:09:27.254517] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:34.117 [2024-12-06 04:09:27.254631] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:34.117 [2024-12-06 04:09:27.254769] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:34.117 [2024-12-06 04:09:27.254922] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:34.117 [2024-12-06 04:09:27.254984] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:18:34.117 04:09:27 
bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:34.118 04:09:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:34.118 04:09:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@720 -- # jq length 00:18:34.118 04:09:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:34.118 04:09:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:34.118 04:09:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:34.118 04:09:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:18:34.118 04:09:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:18:34.118 04:09:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:18:34.118 04:09:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:18:34.118 04:09:27 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:18:34.118 04:09:27 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:18:34.118 04:09:27 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:18:34.118 04:09:27 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:18:34.118 04:09:27 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:18:34.118 04:09:27 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:18:34.118 04:09:27 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:18:34.118 04:09:27 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:18:34.118 04:09:27 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:18:34.377 /dev/nbd0 00:18:34.377 04:09:27 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:18:34.377 04:09:27 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:18:34.377 04:09:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:18:34.377 04:09:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:18:34.377 04:09:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:18:34.377 04:09:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:18:34.377 04:09:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:18:34.377 04:09:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@877 -- # break 00:18:34.377 04:09:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:18:34.377 04:09:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:18:34.377 04:09:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:18:34.377 1+0 records in 00:18:34.377 1+0 records out 00:18:34.377 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00083861 s, 4.9 MB/s 00:18:34.377 04:09:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:34.377 04:09:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:18:34.377 04:09:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:34.377 04:09:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:18:34.377 04:09:27 bdev_raid.raid5f_rebuild_test -- 
common/autotest_common.sh@893 -- # return 0 00:18:34.377 04:09:27 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:18:34.377 04:09:27 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:18:34.377 04:09:27 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:18:34.638 /dev/nbd1 00:18:34.638 04:09:27 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:18:34.638 04:09:27 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:18:34.638 04:09:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:18:34.638 04:09:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:18:34.638 04:09:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:18:34.638 04:09:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:18:34.638 04:09:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:18:34.638 04:09:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@877 -- # break 00:18:34.638 04:09:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:18:34.638 04:09:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:18:34.638 04:09:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:18:34.638 1+0 records in 00:18:34.638 1+0 records out 00:18:34.638 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000655703 s, 6.2 MB/s 00:18:34.638 04:09:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:34.638 04:09:27 bdev_raid.raid5f_rebuild_test -- 
common/autotest_common.sh@890 -- # size=4096 00:18:34.638 04:09:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:34.638 04:09:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:18:34.638 04:09:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:18:34.638 04:09:27 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:18:34.638 04:09:27 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:18:34.638 04:09:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@738 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:18:34.899 04:09:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:18:34.899 04:09:28 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:18:34.899 04:09:28 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:18:34.899 04:09:28 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:18:34.899 04:09:28 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:18:34.899 04:09:28 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:18:34.899 04:09:28 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:18:35.159 04:09:28 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:18:35.159 04:09:28 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:18:35.159 04:09:28 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:18:35.159 04:09:28 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:18:35.159 04:09:28 bdev_raid.raid5f_rebuild_test -- 
bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:18:35.159 04:09:28 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:18:35.159 04:09:28 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:18:35.159 04:09:28 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:18:35.159 04:09:28 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:18:35.159 04:09:28 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:18:35.159 04:09:28 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:18:35.159 04:09:28 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:18:35.159 04:09:28 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:18:35.159 04:09:28 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:18:35.159 04:09:28 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:18:35.159 04:09:28 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:18:35.159 04:09:28 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:18:35.159 04:09:28 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:18:35.159 04:09:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@743 -- # '[' false = true ']' 00:18:35.159 04:09:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@784 -- # killprocess 84875 00:18:35.159 04:09:28 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@954 -- # '[' -z 84875 ']' 00:18:35.159 04:09:28 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@958 -- # kill -0 84875 00:18:35.159 04:09:28 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@959 -- # uname 00:18:35.546 04:09:28 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@959 
-- # '[' Linux = Linux ']' 00:18:35.546 04:09:28 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 84875 00:18:35.546 04:09:28 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:35.546 04:09:28 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:35.546 04:09:28 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 84875' 00:18:35.546 killing process with pid 84875 00:18:35.546 Received shutdown signal, test time was about 60.000000 seconds 00:18:35.546 00:18:35.546 Latency(us) 00:18:35.546 [2024-12-06T04:09:28.900Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:35.546 [2024-12-06T04:09:28.900Z] =================================================================================================================== 00:18:35.546 [2024-12-06T04:09:28.900Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:18:35.546 04:09:28 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@973 -- # kill 84875 00:18:35.547 [2024-12-06 04:09:28.553032] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:18:35.547 04:09:28 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@978 -- # wait 84875 00:18:35.827 [2024-12-06 04:09:29.059305] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:18:37.207 ************************************ 00:18:37.207 END TEST raid5f_rebuild_test 00:18:37.207 ************************************ 00:18:37.207 04:09:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@786 -- # return 0 00:18:37.207 00:18:37.207 real 0m20.551s 00:18:37.207 user 0m24.653s 00:18:37.207 sys 0m2.402s 00:18:37.207 04:09:30 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:37.207 04:09:30 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:37.207 04:09:30 
bdev_raid -- bdev/bdev_raid.sh@991 -- # run_test raid5f_rebuild_test_sb raid_rebuild_test raid5f 4 true false true 00:18:37.207 04:09:30 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:18:37.207 04:09:30 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:37.207 04:09:30 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:18:37.207 ************************************ 00:18:37.207 START TEST raid5f_rebuild_test_sb 00:18:37.207 ************************************ 00:18:37.207 04:09:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid5f 4 true false true 00:18:37.207 04:09:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@569 -- # local raid_level=raid5f 00:18:37.207 04:09:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4 00:18:37.207 04:09:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:18:37.207 04:09:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:18:37.207 04:09:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # local verify=true 00:18:37.207 04:09:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:18:37.208 04:09:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:18:37.208 04:09:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:18:37.208 04:09:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:18:37.208 04:09:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:18:37.208 04:09:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:18:37.208 04:09:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:18:37.208 04:09:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= 
num_base_bdevs )) 00:18:37.208 04:09:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:18:37.208 04:09:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:18:37.208 04:09:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:18:37.208 04:09:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev4 00:18:37.208 04:09:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:18:37.208 04:09:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:18:37.208 04:09:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:18:37.208 04:09:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:18:37.208 04:09:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:18:37.208 04:09:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # local strip_size 00:18:37.208 04:09:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@577 -- # local create_arg 00:18:37.208 04:09:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:18:37.208 04:09:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@579 -- # local data_offset 00:18:37.208 04:09:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@581 -- # '[' raid5f '!=' raid1 ']' 00:18:37.208 04:09:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@582 -- # '[' false = true ']' 00:18:37.208 04:09:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@586 -- # strip_size=64 00:18:37.208 04:09:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@587 -- # create_arg+=' -z 64' 00:18:37.208 04:09:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:18:37.208 04:09:30 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:18:37.208 04:09:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@597 -- # raid_pid=85402 00:18:37.208 04:09:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:18:37.208 04:09:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@598 -- # waitforlisten 85402 00:18:37.208 04:09:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@835 -- # '[' -z 85402 ']' 00:18:37.208 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:37.208 04:09:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:37.208 04:09:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:37.208 04:09:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:37.208 04:09:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:37.208 04:09:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:37.208 I/O size of 3145728 is greater than zero copy threshold (65536). 00:18:37.208 Zero copy mechanism will not be used. 00:18:37.208 [2024-12-06 04:09:30.361964] Starting SPDK v25.01-pre git sha1 a4d2a837b / DPDK 24.03.0 initialization... 
00:18:37.208 [2024-12-06 04:09:30.362124] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85402 ] 00:18:37.208 [2024-12-06 04:09:30.542556] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:37.467 [2024-12-06 04:09:30.669055] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:37.726 [2024-12-06 04:09:30.878217] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:37.727 [2024-12-06 04:09:30.878284] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:37.987 04:09:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:37.987 04:09:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@868 -- # return 0 00:18:37.987 04:09:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:18:37.987 04:09:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:18:37.987 04:09:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:37.987 04:09:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:37.987 BaseBdev1_malloc 00:18:37.987 04:09:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:37.987 04:09:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:18:37.987 04:09:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:37.987 04:09:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:37.987 [2024-12-06 04:09:31.316759] vbdev_passthru.c: 
608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:18:37.987 [2024-12-06 04:09:31.316833] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:37.987 [2024-12-06 04:09:31.316858] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:18:37.987 [2024-12-06 04:09:31.316871] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:37.987 [2024-12-06 04:09:31.319283] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:37.987 [2024-12-06 04:09:31.319328] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:18:37.987 BaseBdev1 00:18:37.988 04:09:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:37.988 04:09:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:18:37.988 04:09:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:18:37.988 04:09:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:37.988 04:09:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:38.248 BaseBdev2_malloc 00:18:38.248 04:09:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:38.248 04:09:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:18:38.248 04:09:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:38.248 04:09:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:38.248 [2024-12-06 04:09:31.375131] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:18:38.248 [2024-12-06 04:09:31.375228] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev 
opened 00:18:38.248 [2024-12-06 04:09:31.375260] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:18:38.248 [2024-12-06 04:09:31.375274] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:38.248 [2024-12-06 04:09:31.378700] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:38.248 [2024-12-06 04:09:31.378758] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:18:38.248 BaseBdev2 00:18:38.248 04:09:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:38.248 04:09:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:18:38.248 04:09:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:18:38.248 04:09:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:38.248 04:09:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:38.248 BaseBdev3_malloc 00:18:38.248 04:09:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:38.248 04:09:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:18:38.248 04:09:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:38.248 04:09:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:38.248 [2024-12-06 04:09:31.439714] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:18:38.248 [2024-12-06 04:09:31.439787] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:38.248 [2024-12-06 04:09:31.439813] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:18:38.248 [2024-12-06 
04:09:31.439825] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:38.248 [2024-12-06 04:09:31.442248] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:38.248 [2024-12-06 04:09:31.442295] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:18:38.248 BaseBdev3 00:18:38.248 04:09:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:38.248 04:09:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:18:38.248 04:09:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:18:38.248 04:09:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:38.248 04:09:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:38.248 BaseBdev4_malloc 00:18:38.248 04:09:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:38.248 04:09:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:18:38.248 04:09:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:38.248 04:09:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:38.248 [2024-12-06 04:09:31.491314] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:18:38.248 [2024-12-06 04:09:31.491387] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:38.248 [2024-12-06 04:09:31.491411] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:18:38.248 [2024-12-06 04:09:31.491422] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:38.248 [2024-12-06 04:09:31.493655] vbdev_passthru.c: 710:vbdev_passthru_register: 
*NOTICE*: pt_bdev registered 00:18:38.248 [2024-12-06 04:09:31.493700] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:18:38.248 BaseBdev4 00:18:38.248 04:09:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:38.249 04:09:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:18:38.249 04:09:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:38.249 04:09:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:38.249 spare_malloc 00:18:38.249 04:09:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:38.249 04:09:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:18:38.249 04:09:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:38.249 04:09:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:38.249 spare_delay 00:18:38.249 04:09:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:38.249 04:09:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:18:38.249 04:09:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:38.249 04:09:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:38.249 [2024-12-06 04:09:31.557085] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:18:38.249 [2024-12-06 04:09:31.557143] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:38.249 [2024-12-06 04:09:31.557164] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 
00:18:38.249 [2024-12-06 04:09:31.557175] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:38.249 [2024-12-06 04:09:31.559424] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:38.249 [2024-12-06 04:09:31.559478] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:18:38.249 spare 00:18:38.249 04:09:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:38.249 04:09:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 00:18:38.249 04:09:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:38.249 04:09:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:38.249 [2024-12-06 04:09:31.569134] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:38.249 [2024-12-06 04:09:31.570978] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:18:38.249 [2024-12-06 04:09:31.571041] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:18:38.249 [2024-12-06 04:09:31.571107] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:18:38.249 [2024-12-06 04:09:31.571304] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:18:38.249 [2024-12-06 04:09:31.571331] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:18:38.249 [2024-12-06 04:09:31.571673] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:18:38.249 [2024-12-06 04:09:31.580102] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:18:38.249 [2024-12-06 04:09:31.580122] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is 
created with name raid_bdev1, raid_bdev 0x617000007780 00:18:38.249 [2024-12-06 04:09:31.580325] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:38.249 04:09:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:38.249 04:09:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:18:38.249 04:09:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:38.249 04:09:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:38.249 04:09:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:38.249 04:09:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:38.249 04:09:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:18:38.249 04:09:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:38.249 04:09:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:38.249 04:09:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:38.249 04:09:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:38.249 04:09:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:38.249 04:09:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:38.249 04:09:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:38.249 04:09:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:38.508 04:09:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:38.508 04:09:31 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:38.508 "name": "raid_bdev1", 00:18:38.508 "uuid": "4f8f5184-6641-4036-9089-31b144cea174", 00:18:38.508 "strip_size_kb": 64, 00:18:38.508 "state": "online", 00:18:38.508 "raid_level": "raid5f", 00:18:38.508 "superblock": true, 00:18:38.508 "num_base_bdevs": 4, 00:18:38.508 "num_base_bdevs_discovered": 4, 00:18:38.508 "num_base_bdevs_operational": 4, 00:18:38.508 "base_bdevs_list": [ 00:18:38.508 { 00:18:38.508 "name": "BaseBdev1", 00:18:38.508 "uuid": "b3f522e6-f30f-50c1-b591-19c394c36156", 00:18:38.508 "is_configured": true, 00:18:38.508 "data_offset": 2048, 00:18:38.508 "data_size": 63488 00:18:38.508 }, 00:18:38.508 { 00:18:38.508 "name": "BaseBdev2", 00:18:38.508 "uuid": "fb0af729-7761-5ac2-829d-2e7f51223e45", 00:18:38.508 "is_configured": true, 00:18:38.508 "data_offset": 2048, 00:18:38.508 "data_size": 63488 00:18:38.508 }, 00:18:38.508 { 00:18:38.508 "name": "BaseBdev3", 00:18:38.508 "uuid": "d7d340cc-9a97-5792-b361-bfe6d976eb72", 00:18:38.508 "is_configured": true, 00:18:38.508 "data_offset": 2048, 00:18:38.508 "data_size": 63488 00:18:38.508 }, 00:18:38.508 { 00:18:38.508 "name": "BaseBdev4", 00:18:38.508 "uuid": "d316d6b3-a3ea-552e-bfd7-ee5ed02094ef", 00:18:38.508 "is_configured": true, 00:18:38.508 "data_offset": 2048, 00:18:38.508 "data_size": 63488 00:18:38.508 } 00:18:38.508 ] 00:18:38.508 }' 00:18:38.508 04:09:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:38.508 04:09:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:38.768 04:09:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:18:38.768 04:09:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:38.768 04:09:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:38.768 04:09:32 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:18:38.768 [2024-12-06 04:09:32.041168] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:38.768 04:09:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:38.768 04:09:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=190464 00:18:38.768 04:09:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:38.768 04:09:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:38.768 04:09:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:38.768 04:09:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:18:38.768 04:09:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:39.027 04:09:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # data_offset=2048 00:18:39.027 04:09:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:18:39.027 04:09:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:18:39.027 04:09:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:18:39.027 04:09:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:18:39.027 04:09:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:18:39.027 04:09:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:18:39.027 04:09:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:18:39.027 04:09:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:18:39.027 04:09:32 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/nbd_common.sh@11 -- # local nbd_list 00:18:39.027 04:09:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:18:39.027 04:09:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:18:39.027 04:09:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:18:39.028 04:09:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:18:39.028 [2024-12-06 04:09:32.324538] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:18:39.028 /dev/nbd0 00:18:39.028 04:09:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:18:39.028 04:09:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:18:39.028 04:09:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:18:39.028 04:09:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:18:39.028 04:09:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:18:39.028 04:09:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:18:39.028 04:09:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:18:39.028 04:09:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:18:39.028 04:09:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:18:39.028 04:09:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:18:39.028 04:09:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:18:39.028 1+0 records in 00:18:39.028 1+0 records out 00:18:39.028 4096 
bytes (4.1 kB, 4.0 KiB) copied, 0.000354261 s, 11.6 MB/s 00:18:39.028 04:09:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:39.287 04:09:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:18:39.287 04:09:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:39.287 04:09:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:18:39.287 04:09:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:18:39.288 04:09:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:18:39.288 04:09:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:18:39.288 04:09:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@629 -- # '[' raid5f = raid5f ']' 00:18:39.288 04:09:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@630 -- # write_unit_size=384 00:18:39.288 04:09:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@631 -- # echo 192 00:18:39.288 04:09:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=196608 count=496 oflag=direct 00:18:39.546 496+0 records in 00:18:39.546 496+0 records out 00:18:39.546 97517568 bytes (98 MB, 93 MiB) copied, 0.440521 s, 221 MB/s 00:18:39.546 04:09:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:18:39.546 04:09:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:18:39.546 04:09:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:18:39.546 04:09:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:18:39.546 04:09:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # 
local i 00:18:39.546 04:09:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:18:39.546 04:09:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:18:39.806 [2024-12-06 04:09:33.027703] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:39.806 04:09:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:18:39.806 04:09:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:18:39.807 04:09:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:18:39.807 04:09:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:18:39.807 04:09:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:18:39.807 04:09:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:18:39.807 04:09:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:18:39.807 04:09:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:18:39.807 04:09:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:18:39.807 04:09:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:39.807 04:09:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:39.807 [2024-12-06 04:09:33.057255] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:18:39.807 04:09:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:39.807 04:09:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:18:39.807 04:09:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local 
raid_bdev_name=raid_bdev1 00:18:39.807 04:09:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:39.807 04:09:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:39.807 04:09:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:39.807 04:09:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:18:39.807 04:09:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:39.807 04:09:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:39.807 04:09:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:39.807 04:09:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:39.807 04:09:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:39.807 04:09:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:39.807 04:09:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:39.807 04:09:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:39.807 04:09:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:39.807 04:09:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:39.807 "name": "raid_bdev1", 00:18:39.807 "uuid": "4f8f5184-6641-4036-9089-31b144cea174", 00:18:39.807 "strip_size_kb": 64, 00:18:39.807 "state": "online", 00:18:39.807 "raid_level": "raid5f", 00:18:39.807 "superblock": true, 00:18:39.807 "num_base_bdevs": 4, 00:18:39.807 "num_base_bdevs_discovered": 3, 00:18:39.807 "num_base_bdevs_operational": 3, 00:18:39.807 "base_bdevs_list": [ 00:18:39.807 { 00:18:39.807 "name": null, 
00:18:39.807 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:39.807 "is_configured": false, 00:18:39.807 "data_offset": 0, 00:18:39.807 "data_size": 63488 00:18:39.807 }, 00:18:39.807 { 00:18:39.807 "name": "BaseBdev2", 00:18:39.807 "uuid": "fb0af729-7761-5ac2-829d-2e7f51223e45", 00:18:39.807 "is_configured": true, 00:18:39.807 "data_offset": 2048, 00:18:39.807 "data_size": 63488 00:18:39.807 }, 00:18:39.807 { 00:18:39.807 "name": "BaseBdev3", 00:18:39.807 "uuid": "d7d340cc-9a97-5792-b361-bfe6d976eb72", 00:18:39.807 "is_configured": true, 00:18:39.807 "data_offset": 2048, 00:18:39.807 "data_size": 63488 00:18:39.807 }, 00:18:39.807 { 00:18:39.807 "name": "BaseBdev4", 00:18:39.807 "uuid": "d316d6b3-a3ea-552e-bfd7-ee5ed02094ef", 00:18:39.807 "is_configured": true, 00:18:39.807 "data_offset": 2048, 00:18:39.807 "data_size": 63488 00:18:39.807 } 00:18:39.807 ] 00:18:39.807 }' 00:18:39.807 04:09:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:39.807 04:09:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:40.377 04:09:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:18:40.377 04:09:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:40.377 04:09:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:40.377 [2024-12-06 04:09:33.496622] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:18:40.377 [2024-12-06 04:09:33.512187] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002aa50 00:18:40.377 04:09:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:40.377 04:09:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@647 -- # sleep 1 00:18:40.377 [2024-12-06 04:09:33.521636] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid 
bdev raid_bdev1 00:18:41.357 04:09:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:41.357 04:09:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:41.357 04:09:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:41.357 04:09:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:41.357 04:09:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:41.357 04:09:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:41.357 04:09:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:41.357 04:09:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:41.357 04:09:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:41.357 04:09:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:41.357 04:09:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:41.357 "name": "raid_bdev1", 00:18:41.357 "uuid": "4f8f5184-6641-4036-9089-31b144cea174", 00:18:41.357 "strip_size_kb": 64, 00:18:41.357 "state": "online", 00:18:41.357 "raid_level": "raid5f", 00:18:41.357 "superblock": true, 00:18:41.357 "num_base_bdevs": 4, 00:18:41.357 "num_base_bdevs_discovered": 4, 00:18:41.357 "num_base_bdevs_operational": 4, 00:18:41.357 "process": { 00:18:41.357 "type": "rebuild", 00:18:41.357 "target": "spare", 00:18:41.357 "progress": { 00:18:41.357 "blocks": 19200, 00:18:41.357 "percent": 10 00:18:41.357 } 00:18:41.357 }, 00:18:41.357 "base_bdevs_list": [ 00:18:41.357 { 00:18:41.357 "name": "spare", 00:18:41.358 "uuid": "2ea64bc3-9e75-508f-bc1c-3545bd363e19", 00:18:41.358 "is_configured": true, 
00:18:41.358 "data_offset": 2048, 00:18:41.358 "data_size": 63488 00:18:41.358 }, 00:18:41.358 { 00:18:41.358 "name": "BaseBdev2", 00:18:41.358 "uuid": "fb0af729-7761-5ac2-829d-2e7f51223e45", 00:18:41.358 "is_configured": true, 00:18:41.358 "data_offset": 2048, 00:18:41.358 "data_size": 63488 00:18:41.358 }, 00:18:41.358 { 00:18:41.358 "name": "BaseBdev3", 00:18:41.358 "uuid": "d7d340cc-9a97-5792-b361-bfe6d976eb72", 00:18:41.358 "is_configured": true, 00:18:41.358 "data_offset": 2048, 00:18:41.358 "data_size": 63488 00:18:41.358 }, 00:18:41.358 { 00:18:41.358 "name": "BaseBdev4", 00:18:41.358 "uuid": "d316d6b3-a3ea-552e-bfd7-ee5ed02094ef", 00:18:41.358 "is_configured": true, 00:18:41.358 "data_offset": 2048, 00:18:41.358 "data_size": 63488 00:18:41.358 } 00:18:41.358 ] 00:18:41.358 }' 00:18:41.358 04:09:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:41.358 04:09:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:41.358 04:09:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:41.358 04:09:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:41.358 04:09:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:18:41.358 04:09:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:41.358 04:09:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:41.358 [2024-12-06 04:09:34.668469] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:41.617 [2024-12-06 04:09:34.727728] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:18:41.617 [2024-12-06 04:09:34.727856] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:41.617 [2024-12-06 
04:09:34.727875] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:41.617 [2024-12-06 04:09:34.727885] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:18:41.617 04:09:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:41.617 04:09:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:18:41.617 04:09:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:41.617 04:09:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:41.617 04:09:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:41.617 04:09:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:41.617 04:09:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:18:41.618 04:09:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:41.618 04:09:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:41.618 04:09:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:41.618 04:09:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:41.618 04:09:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:41.618 04:09:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:41.618 04:09:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:41.618 04:09:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:41.618 04:09:34 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:41.618 04:09:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:41.618 "name": "raid_bdev1", 00:18:41.618 "uuid": "4f8f5184-6641-4036-9089-31b144cea174", 00:18:41.618 "strip_size_kb": 64, 00:18:41.618 "state": "online", 00:18:41.618 "raid_level": "raid5f", 00:18:41.618 "superblock": true, 00:18:41.618 "num_base_bdevs": 4, 00:18:41.618 "num_base_bdevs_discovered": 3, 00:18:41.618 "num_base_bdevs_operational": 3, 00:18:41.618 "base_bdevs_list": [ 00:18:41.618 { 00:18:41.618 "name": null, 00:18:41.618 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:41.618 "is_configured": false, 00:18:41.618 "data_offset": 0, 00:18:41.618 "data_size": 63488 00:18:41.618 }, 00:18:41.618 { 00:18:41.618 "name": "BaseBdev2", 00:18:41.618 "uuid": "fb0af729-7761-5ac2-829d-2e7f51223e45", 00:18:41.618 "is_configured": true, 00:18:41.618 "data_offset": 2048, 00:18:41.618 "data_size": 63488 00:18:41.618 }, 00:18:41.618 { 00:18:41.618 "name": "BaseBdev3", 00:18:41.618 "uuid": "d7d340cc-9a97-5792-b361-bfe6d976eb72", 00:18:41.618 "is_configured": true, 00:18:41.618 "data_offset": 2048, 00:18:41.618 "data_size": 63488 00:18:41.618 }, 00:18:41.618 { 00:18:41.618 "name": "BaseBdev4", 00:18:41.618 "uuid": "d316d6b3-a3ea-552e-bfd7-ee5ed02094ef", 00:18:41.618 "is_configured": true, 00:18:41.618 "data_offset": 2048, 00:18:41.618 "data_size": 63488 00:18:41.618 } 00:18:41.618 ] 00:18:41.618 }' 00:18:41.618 04:09:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:41.618 04:09:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:41.878 04:09:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:18:41.878 04:09:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:41.878 04:09:35 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@170 -- # local process_type=none 00:18:41.878 04:09:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:18:41.878 04:09:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:41.878 04:09:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:41.878 04:09:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:41.878 04:09:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:41.878 04:09:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:41.878 04:09:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:41.878 04:09:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:41.878 "name": "raid_bdev1", 00:18:41.878 "uuid": "4f8f5184-6641-4036-9089-31b144cea174", 00:18:41.878 "strip_size_kb": 64, 00:18:41.878 "state": "online", 00:18:41.878 "raid_level": "raid5f", 00:18:41.878 "superblock": true, 00:18:41.878 "num_base_bdevs": 4, 00:18:41.878 "num_base_bdevs_discovered": 3, 00:18:41.878 "num_base_bdevs_operational": 3, 00:18:41.878 "base_bdevs_list": [ 00:18:41.878 { 00:18:41.878 "name": null, 00:18:41.878 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:41.878 "is_configured": false, 00:18:41.878 "data_offset": 0, 00:18:41.878 "data_size": 63488 00:18:41.878 }, 00:18:41.878 { 00:18:41.878 "name": "BaseBdev2", 00:18:41.878 "uuid": "fb0af729-7761-5ac2-829d-2e7f51223e45", 00:18:41.878 "is_configured": true, 00:18:41.878 "data_offset": 2048, 00:18:41.878 "data_size": 63488 00:18:41.878 }, 00:18:41.878 { 00:18:41.878 "name": "BaseBdev3", 00:18:41.878 "uuid": "d7d340cc-9a97-5792-b361-bfe6d976eb72", 00:18:41.878 "is_configured": true, 00:18:41.878 "data_offset": 2048, 00:18:41.878 "data_size": 63488 00:18:41.878 }, 
00:18:41.878 { 00:18:41.878 "name": "BaseBdev4", 00:18:41.878 "uuid": "d316d6b3-a3ea-552e-bfd7-ee5ed02094ef", 00:18:41.878 "is_configured": true, 00:18:41.878 "data_offset": 2048, 00:18:41.878 "data_size": 63488 00:18:41.878 } 00:18:41.878 ] 00:18:41.878 }' 00:18:41.878 04:09:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:42.137 04:09:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:18:42.137 04:09:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:42.137 04:09:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:18:42.137 04:09:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:18:42.137 04:09:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:42.137 04:09:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:42.137 [2024-12-06 04:09:35.309539] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:18:42.137 [2024-12-06 04:09:35.324116] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002ab20 00:18:42.137 04:09:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:42.137 04:09:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@663 -- # sleep 1 00:18:42.137 [2024-12-06 04:09:35.333344] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:18:43.080 04:09:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:43.080 04:09:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:43.080 04:09:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 
00:18:43.080 04:09:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:43.080 04:09:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:43.080 04:09:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:43.080 04:09:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:43.080 04:09:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:43.080 04:09:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:43.080 04:09:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:43.080 04:09:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:43.080 "name": "raid_bdev1", 00:18:43.080 "uuid": "4f8f5184-6641-4036-9089-31b144cea174", 00:18:43.080 "strip_size_kb": 64, 00:18:43.080 "state": "online", 00:18:43.080 "raid_level": "raid5f", 00:18:43.080 "superblock": true, 00:18:43.080 "num_base_bdevs": 4, 00:18:43.080 "num_base_bdevs_discovered": 4, 00:18:43.080 "num_base_bdevs_operational": 4, 00:18:43.080 "process": { 00:18:43.080 "type": "rebuild", 00:18:43.080 "target": "spare", 00:18:43.080 "progress": { 00:18:43.080 "blocks": 19200, 00:18:43.080 "percent": 10 00:18:43.080 } 00:18:43.080 }, 00:18:43.080 "base_bdevs_list": [ 00:18:43.080 { 00:18:43.080 "name": "spare", 00:18:43.080 "uuid": "2ea64bc3-9e75-508f-bc1c-3545bd363e19", 00:18:43.080 "is_configured": true, 00:18:43.080 "data_offset": 2048, 00:18:43.080 "data_size": 63488 00:18:43.080 }, 00:18:43.080 { 00:18:43.080 "name": "BaseBdev2", 00:18:43.080 "uuid": "fb0af729-7761-5ac2-829d-2e7f51223e45", 00:18:43.080 "is_configured": true, 00:18:43.080 "data_offset": 2048, 00:18:43.080 "data_size": 63488 00:18:43.080 }, 00:18:43.080 { 00:18:43.080 "name": "BaseBdev3", 00:18:43.080 "uuid": 
"d7d340cc-9a97-5792-b361-bfe6d976eb72", 00:18:43.080 "is_configured": true, 00:18:43.080 "data_offset": 2048, 00:18:43.080 "data_size": 63488 00:18:43.080 }, 00:18:43.080 { 00:18:43.080 "name": "BaseBdev4", 00:18:43.080 "uuid": "d316d6b3-a3ea-552e-bfd7-ee5ed02094ef", 00:18:43.080 "is_configured": true, 00:18:43.080 "data_offset": 2048, 00:18:43.080 "data_size": 63488 00:18:43.080 } 00:18:43.080 ] 00:18:43.080 }' 00:18:43.080 04:09:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:43.342 04:09:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:43.342 04:09:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:43.342 04:09:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:43.342 04:09:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:18:43.342 04:09:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:18:43.342 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:18:43.342 04:09:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4 00:18:43.342 04:09:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' raid5f = raid1 ']' 00:18:43.342 04:09:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@706 -- # local timeout=654 00:18:43.342 04:09:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:18:43.342 04:09:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:43.342 04:09:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:43.342 04:09:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 
00:18:43.342 04:09:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:43.342 04:09:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:43.342 04:09:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:43.342 04:09:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:43.342 04:09:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:43.342 04:09:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:43.342 04:09:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:43.342 04:09:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:43.342 "name": "raid_bdev1", 00:18:43.342 "uuid": "4f8f5184-6641-4036-9089-31b144cea174", 00:18:43.343 "strip_size_kb": 64, 00:18:43.343 "state": "online", 00:18:43.343 "raid_level": "raid5f", 00:18:43.343 "superblock": true, 00:18:43.343 "num_base_bdevs": 4, 00:18:43.343 "num_base_bdevs_discovered": 4, 00:18:43.343 "num_base_bdevs_operational": 4, 00:18:43.343 "process": { 00:18:43.343 "type": "rebuild", 00:18:43.343 "target": "spare", 00:18:43.343 "progress": { 00:18:43.343 "blocks": 21120, 00:18:43.343 "percent": 11 00:18:43.343 } 00:18:43.343 }, 00:18:43.343 "base_bdevs_list": [ 00:18:43.343 { 00:18:43.343 "name": "spare", 00:18:43.343 "uuid": "2ea64bc3-9e75-508f-bc1c-3545bd363e19", 00:18:43.343 "is_configured": true, 00:18:43.343 "data_offset": 2048, 00:18:43.343 "data_size": 63488 00:18:43.343 }, 00:18:43.343 { 00:18:43.343 "name": "BaseBdev2", 00:18:43.343 "uuid": "fb0af729-7761-5ac2-829d-2e7f51223e45", 00:18:43.343 "is_configured": true, 00:18:43.343 "data_offset": 2048, 00:18:43.343 "data_size": 63488 00:18:43.343 }, 00:18:43.343 { 00:18:43.343 "name": "BaseBdev3", 00:18:43.343 "uuid": 
"d7d340cc-9a97-5792-b361-bfe6d976eb72", 00:18:43.343 "is_configured": true, 00:18:43.343 "data_offset": 2048, 00:18:43.343 "data_size": 63488 00:18:43.343 }, 00:18:43.343 { 00:18:43.343 "name": "BaseBdev4", 00:18:43.343 "uuid": "d316d6b3-a3ea-552e-bfd7-ee5ed02094ef", 00:18:43.343 "is_configured": true, 00:18:43.343 "data_offset": 2048, 00:18:43.343 "data_size": 63488 00:18:43.343 } 00:18:43.343 ] 00:18:43.343 }' 00:18:43.343 04:09:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:43.343 04:09:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:43.343 04:09:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:43.343 04:09:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:43.343 04:09:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:18:44.282 04:09:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:18:44.282 04:09:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:44.282 04:09:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:44.282 04:09:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:44.282 04:09:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:44.282 04:09:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:44.282 04:09:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:44.282 04:09:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:44.282 04:09:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == 
"raid_bdev1")' 00:18:44.282 04:09:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:44.543 04:09:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:44.543 04:09:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:44.543 "name": "raid_bdev1", 00:18:44.543 "uuid": "4f8f5184-6641-4036-9089-31b144cea174", 00:18:44.543 "strip_size_kb": 64, 00:18:44.543 "state": "online", 00:18:44.543 "raid_level": "raid5f", 00:18:44.543 "superblock": true, 00:18:44.543 "num_base_bdevs": 4, 00:18:44.543 "num_base_bdevs_discovered": 4, 00:18:44.543 "num_base_bdevs_operational": 4, 00:18:44.543 "process": { 00:18:44.543 "type": "rebuild", 00:18:44.543 "target": "spare", 00:18:44.543 "progress": { 00:18:44.543 "blocks": 42240, 00:18:44.543 "percent": 22 00:18:44.543 } 00:18:44.543 }, 00:18:44.543 "base_bdevs_list": [ 00:18:44.543 { 00:18:44.543 "name": "spare", 00:18:44.543 "uuid": "2ea64bc3-9e75-508f-bc1c-3545bd363e19", 00:18:44.543 "is_configured": true, 00:18:44.543 "data_offset": 2048, 00:18:44.543 "data_size": 63488 00:18:44.543 }, 00:18:44.543 { 00:18:44.543 "name": "BaseBdev2", 00:18:44.543 "uuid": "fb0af729-7761-5ac2-829d-2e7f51223e45", 00:18:44.543 "is_configured": true, 00:18:44.543 "data_offset": 2048, 00:18:44.543 "data_size": 63488 00:18:44.543 }, 00:18:44.543 { 00:18:44.543 "name": "BaseBdev3", 00:18:44.543 "uuid": "d7d340cc-9a97-5792-b361-bfe6d976eb72", 00:18:44.543 "is_configured": true, 00:18:44.543 "data_offset": 2048, 00:18:44.543 "data_size": 63488 00:18:44.543 }, 00:18:44.543 { 00:18:44.543 "name": "BaseBdev4", 00:18:44.543 "uuid": "d316d6b3-a3ea-552e-bfd7-ee5ed02094ef", 00:18:44.543 "is_configured": true, 00:18:44.543 "data_offset": 2048, 00:18:44.543 "data_size": 63488 00:18:44.543 } 00:18:44.543 ] 00:18:44.543 }' 00:18:44.543 04:09:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:44.543 04:09:37 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:44.543 04:09:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:44.543 04:09:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:44.543 04:09:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:18:45.483 04:09:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:18:45.483 04:09:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:45.483 04:09:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:45.483 04:09:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:45.483 04:09:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:45.483 04:09:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:45.483 04:09:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:45.483 04:09:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:45.483 04:09:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:45.483 04:09:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:45.483 04:09:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:45.483 04:09:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:45.483 "name": "raid_bdev1", 00:18:45.483 "uuid": "4f8f5184-6641-4036-9089-31b144cea174", 00:18:45.483 "strip_size_kb": 64, 00:18:45.483 "state": "online", 00:18:45.483 "raid_level": "raid5f", 00:18:45.483 "superblock": true, 
00:18:45.483 "num_base_bdevs": 4, 00:18:45.483 "num_base_bdevs_discovered": 4, 00:18:45.483 "num_base_bdevs_operational": 4, 00:18:45.483 "process": { 00:18:45.483 "type": "rebuild", 00:18:45.483 "target": "spare", 00:18:45.483 "progress": { 00:18:45.483 "blocks": 65280, 00:18:45.483 "percent": 34 00:18:45.483 } 00:18:45.483 }, 00:18:45.483 "base_bdevs_list": [ 00:18:45.483 { 00:18:45.483 "name": "spare", 00:18:45.483 "uuid": "2ea64bc3-9e75-508f-bc1c-3545bd363e19", 00:18:45.483 "is_configured": true, 00:18:45.483 "data_offset": 2048, 00:18:45.483 "data_size": 63488 00:18:45.483 }, 00:18:45.483 { 00:18:45.483 "name": "BaseBdev2", 00:18:45.483 "uuid": "fb0af729-7761-5ac2-829d-2e7f51223e45", 00:18:45.483 "is_configured": true, 00:18:45.483 "data_offset": 2048, 00:18:45.483 "data_size": 63488 00:18:45.483 }, 00:18:45.483 { 00:18:45.483 "name": "BaseBdev3", 00:18:45.483 "uuid": "d7d340cc-9a97-5792-b361-bfe6d976eb72", 00:18:45.483 "is_configured": true, 00:18:45.483 "data_offset": 2048, 00:18:45.483 "data_size": 63488 00:18:45.483 }, 00:18:45.483 { 00:18:45.483 "name": "BaseBdev4", 00:18:45.483 "uuid": "d316d6b3-a3ea-552e-bfd7-ee5ed02094ef", 00:18:45.483 "is_configured": true, 00:18:45.483 "data_offset": 2048, 00:18:45.483 "data_size": 63488 00:18:45.483 } 00:18:45.483 ] 00:18:45.483 }' 00:18:45.483 04:09:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:45.743 04:09:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:45.743 04:09:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:45.743 04:09:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:45.743 04:09:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:18:46.683 04:09:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:18:46.683 04:09:39 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:46.683 04:09:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:46.683 04:09:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:46.683 04:09:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:46.683 04:09:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:46.683 04:09:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:46.683 04:09:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:46.683 04:09:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:46.683 04:09:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:46.683 04:09:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:46.683 04:09:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:46.683 "name": "raid_bdev1", 00:18:46.683 "uuid": "4f8f5184-6641-4036-9089-31b144cea174", 00:18:46.683 "strip_size_kb": 64, 00:18:46.683 "state": "online", 00:18:46.683 "raid_level": "raid5f", 00:18:46.683 "superblock": true, 00:18:46.683 "num_base_bdevs": 4, 00:18:46.683 "num_base_bdevs_discovered": 4, 00:18:46.683 "num_base_bdevs_operational": 4, 00:18:46.683 "process": { 00:18:46.683 "type": "rebuild", 00:18:46.683 "target": "spare", 00:18:46.683 "progress": { 00:18:46.683 "blocks": 86400, 00:18:46.683 "percent": 45 00:18:46.683 } 00:18:46.683 }, 00:18:46.683 "base_bdevs_list": [ 00:18:46.683 { 00:18:46.683 "name": "spare", 00:18:46.683 "uuid": "2ea64bc3-9e75-508f-bc1c-3545bd363e19", 00:18:46.683 "is_configured": true, 00:18:46.683 "data_offset": 2048, 00:18:46.683 
"data_size": 63488 00:18:46.683 }, 00:18:46.683 { 00:18:46.683 "name": "BaseBdev2", 00:18:46.683 "uuid": "fb0af729-7761-5ac2-829d-2e7f51223e45", 00:18:46.683 "is_configured": true, 00:18:46.683 "data_offset": 2048, 00:18:46.683 "data_size": 63488 00:18:46.683 }, 00:18:46.683 { 00:18:46.683 "name": "BaseBdev3", 00:18:46.683 "uuid": "d7d340cc-9a97-5792-b361-bfe6d976eb72", 00:18:46.683 "is_configured": true, 00:18:46.683 "data_offset": 2048, 00:18:46.683 "data_size": 63488 00:18:46.683 }, 00:18:46.683 { 00:18:46.683 "name": "BaseBdev4", 00:18:46.683 "uuid": "d316d6b3-a3ea-552e-bfd7-ee5ed02094ef", 00:18:46.683 "is_configured": true, 00:18:46.683 "data_offset": 2048, 00:18:46.683 "data_size": 63488 00:18:46.683 } 00:18:46.683 ] 00:18:46.683 }' 00:18:46.683 04:09:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:46.683 04:09:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:46.683 04:09:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:46.683 04:09:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:46.683 04:09:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:18:48.096 04:09:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:18:48.096 04:09:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:48.096 04:09:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:48.096 04:09:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:48.096 04:09:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:48.096 04:09:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 
00:18:48.096 04:09:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:48.096 04:09:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:48.096 04:09:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:48.096 04:09:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:48.096 04:09:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:48.096 04:09:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:48.096 "name": "raid_bdev1", 00:18:48.096 "uuid": "4f8f5184-6641-4036-9089-31b144cea174", 00:18:48.096 "strip_size_kb": 64, 00:18:48.096 "state": "online", 00:18:48.096 "raid_level": "raid5f", 00:18:48.096 "superblock": true, 00:18:48.096 "num_base_bdevs": 4, 00:18:48.096 "num_base_bdevs_discovered": 4, 00:18:48.096 "num_base_bdevs_operational": 4, 00:18:48.096 "process": { 00:18:48.097 "type": "rebuild", 00:18:48.097 "target": "spare", 00:18:48.097 "progress": { 00:18:48.097 "blocks": 107520, 00:18:48.097 "percent": 56 00:18:48.097 } 00:18:48.097 }, 00:18:48.097 "base_bdevs_list": [ 00:18:48.097 { 00:18:48.097 "name": "spare", 00:18:48.097 "uuid": "2ea64bc3-9e75-508f-bc1c-3545bd363e19", 00:18:48.097 "is_configured": true, 00:18:48.097 "data_offset": 2048, 00:18:48.097 "data_size": 63488 00:18:48.097 }, 00:18:48.097 { 00:18:48.097 "name": "BaseBdev2", 00:18:48.097 "uuid": "fb0af729-7761-5ac2-829d-2e7f51223e45", 00:18:48.097 "is_configured": true, 00:18:48.097 "data_offset": 2048, 00:18:48.097 "data_size": 63488 00:18:48.097 }, 00:18:48.097 { 00:18:48.097 "name": "BaseBdev3", 00:18:48.097 "uuid": "d7d340cc-9a97-5792-b361-bfe6d976eb72", 00:18:48.097 "is_configured": true, 00:18:48.097 "data_offset": 2048, 00:18:48.097 "data_size": 63488 00:18:48.097 }, 00:18:48.097 { 00:18:48.097 "name": "BaseBdev4", 
00:18:48.097 "uuid": "d316d6b3-a3ea-552e-bfd7-ee5ed02094ef", 00:18:48.097 "is_configured": true, 00:18:48.097 "data_offset": 2048, 00:18:48.097 "data_size": 63488 00:18:48.097 } 00:18:48.097 ] 00:18:48.097 }' 00:18:48.097 04:09:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:48.097 04:09:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:48.097 04:09:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:48.097 04:09:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:48.097 04:09:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:18:49.035 04:09:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:18:49.035 04:09:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:49.035 04:09:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:49.035 04:09:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:49.035 04:09:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:49.035 04:09:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:49.035 04:09:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:49.035 04:09:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:49.035 04:09:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:49.035 04:09:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:49.035 04:09:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:18:49.035 04:09:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:49.035 "name": "raid_bdev1", 00:18:49.035 "uuid": "4f8f5184-6641-4036-9089-31b144cea174", 00:18:49.035 "strip_size_kb": 64, 00:18:49.035 "state": "online", 00:18:49.035 "raid_level": "raid5f", 00:18:49.035 "superblock": true, 00:18:49.035 "num_base_bdevs": 4, 00:18:49.035 "num_base_bdevs_discovered": 4, 00:18:49.035 "num_base_bdevs_operational": 4, 00:18:49.035 "process": { 00:18:49.035 "type": "rebuild", 00:18:49.035 "target": "spare", 00:18:49.035 "progress": { 00:18:49.035 "blocks": 130560, 00:18:49.035 "percent": 68 00:18:49.035 } 00:18:49.035 }, 00:18:49.035 "base_bdevs_list": [ 00:18:49.035 { 00:18:49.035 "name": "spare", 00:18:49.035 "uuid": "2ea64bc3-9e75-508f-bc1c-3545bd363e19", 00:18:49.035 "is_configured": true, 00:18:49.035 "data_offset": 2048, 00:18:49.035 "data_size": 63488 00:18:49.035 }, 00:18:49.035 { 00:18:49.035 "name": "BaseBdev2", 00:18:49.035 "uuid": "fb0af729-7761-5ac2-829d-2e7f51223e45", 00:18:49.035 "is_configured": true, 00:18:49.035 "data_offset": 2048, 00:18:49.036 "data_size": 63488 00:18:49.036 }, 00:18:49.036 { 00:18:49.036 "name": "BaseBdev3", 00:18:49.036 "uuid": "d7d340cc-9a97-5792-b361-bfe6d976eb72", 00:18:49.036 "is_configured": true, 00:18:49.036 "data_offset": 2048, 00:18:49.036 "data_size": 63488 00:18:49.036 }, 00:18:49.036 { 00:18:49.036 "name": "BaseBdev4", 00:18:49.036 "uuid": "d316d6b3-a3ea-552e-bfd7-ee5ed02094ef", 00:18:49.036 "is_configured": true, 00:18:49.036 "data_offset": 2048, 00:18:49.036 "data_size": 63488 00:18:49.036 } 00:18:49.036 ] 00:18:49.036 }' 00:18:49.036 04:09:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:49.036 04:09:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:49.036 04:09:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 
00:18:49.036 04:09:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:49.036 04:09:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:18:49.974 04:09:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:18:49.974 04:09:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:49.974 04:09:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:49.974 04:09:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:49.974 04:09:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:49.974 04:09:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:49.974 04:09:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:49.974 04:09:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:49.974 04:09:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:49.974 04:09:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:50.234 04:09:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:50.234 04:09:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:50.234 "name": "raid_bdev1", 00:18:50.234 "uuid": "4f8f5184-6641-4036-9089-31b144cea174", 00:18:50.234 "strip_size_kb": 64, 00:18:50.234 "state": "online", 00:18:50.234 "raid_level": "raid5f", 00:18:50.234 "superblock": true, 00:18:50.234 "num_base_bdevs": 4, 00:18:50.234 "num_base_bdevs_discovered": 4, 00:18:50.234 "num_base_bdevs_operational": 4, 00:18:50.234 "process": { 00:18:50.234 "type": "rebuild", 00:18:50.234 "target": "spare", 
00:18:50.234 "progress": { 00:18:50.234 "blocks": 151680, 00:18:50.234 "percent": 79 00:18:50.234 } 00:18:50.234 }, 00:18:50.234 "base_bdevs_list": [ 00:18:50.234 { 00:18:50.234 "name": "spare", 00:18:50.234 "uuid": "2ea64bc3-9e75-508f-bc1c-3545bd363e19", 00:18:50.234 "is_configured": true, 00:18:50.234 "data_offset": 2048, 00:18:50.234 "data_size": 63488 00:18:50.234 }, 00:18:50.234 { 00:18:50.234 "name": "BaseBdev2", 00:18:50.234 "uuid": "fb0af729-7761-5ac2-829d-2e7f51223e45", 00:18:50.234 "is_configured": true, 00:18:50.234 "data_offset": 2048, 00:18:50.234 "data_size": 63488 00:18:50.234 }, 00:18:50.234 { 00:18:50.234 "name": "BaseBdev3", 00:18:50.234 "uuid": "d7d340cc-9a97-5792-b361-bfe6d976eb72", 00:18:50.234 "is_configured": true, 00:18:50.234 "data_offset": 2048, 00:18:50.234 "data_size": 63488 00:18:50.234 }, 00:18:50.234 { 00:18:50.234 "name": "BaseBdev4", 00:18:50.234 "uuid": "d316d6b3-a3ea-552e-bfd7-ee5ed02094ef", 00:18:50.234 "is_configured": true, 00:18:50.234 "data_offset": 2048, 00:18:50.234 "data_size": 63488 00:18:50.234 } 00:18:50.234 ] 00:18:50.234 }' 00:18:50.234 04:09:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:50.234 04:09:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:50.234 04:09:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:50.234 04:09:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:50.234 04:09:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:18:51.169 04:09:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:18:51.169 04:09:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:51.169 04:09:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local 
raid_bdev_name=raid_bdev1 00:18:51.169 04:09:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:51.169 04:09:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:51.169 04:09:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:51.169 04:09:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:51.169 04:09:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:51.169 04:09:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:51.169 04:09:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:51.169 04:09:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:51.169 04:09:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:51.169 "name": "raid_bdev1", 00:18:51.169 "uuid": "4f8f5184-6641-4036-9089-31b144cea174", 00:18:51.169 "strip_size_kb": 64, 00:18:51.169 "state": "online", 00:18:51.169 "raid_level": "raid5f", 00:18:51.169 "superblock": true, 00:18:51.169 "num_base_bdevs": 4, 00:18:51.169 "num_base_bdevs_discovered": 4, 00:18:51.169 "num_base_bdevs_operational": 4, 00:18:51.169 "process": { 00:18:51.169 "type": "rebuild", 00:18:51.169 "target": "spare", 00:18:51.169 "progress": { 00:18:51.169 "blocks": 172800, 00:18:51.169 "percent": 90 00:18:51.169 } 00:18:51.169 }, 00:18:51.169 "base_bdevs_list": [ 00:18:51.169 { 00:18:51.169 "name": "spare", 00:18:51.169 "uuid": "2ea64bc3-9e75-508f-bc1c-3545bd363e19", 00:18:51.169 "is_configured": true, 00:18:51.169 "data_offset": 2048, 00:18:51.169 "data_size": 63488 00:18:51.169 }, 00:18:51.169 { 00:18:51.169 "name": "BaseBdev2", 00:18:51.169 "uuid": "fb0af729-7761-5ac2-829d-2e7f51223e45", 00:18:51.169 "is_configured": true, 00:18:51.169 
"data_offset": 2048, 00:18:51.169 "data_size": 63488 00:18:51.169 }, 00:18:51.169 { 00:18:51.169 "name": "BaseBdev3", 00:18:51.169 "uuid": "d7d340cc-9a97-5792-b361-bfe6d976eb72", 00:18:51.169 "is_configured": true, 00:18:51.169 "data_offset": 2048, 00:18:51.169 "data_size": 63488 00:18:51.169 }, 00:18:51.169 { 00:18:51.169 "name": "BaseBdev4", 00:18:51.169 "uuid": "d316d6b3-a3ea-552e-bfd7-ee5ed02094ef", 00:18:51.169 "is_configured": true, 00:18:51.169 "data_offset": 2048, 00:18:51.169 "data_size": 63488 00:18:51.169 } 00:18:51.169 ] 00:18:51.169 }' 00:18:51.169 04:09:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:51.428 04:09:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:51.428 04:09:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:51.428 04:09:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:51.428 04:09:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:18:52.365 [2024-12-06 04:09:45.393315] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:18:52.365 [2024-12-06 04:09:45.393478] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:18:52.365 [2024-12-06 04:09:45.393654] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:52.365 04:09:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:18:52.365 04:09:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:52.365 04:09:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:52.365 04:09:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:52.365 04:09:45 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:52.365 04:09:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:52.365 04:09:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:52.365 04:09:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:52.365 04:09:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:52.365 04:09:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:52.365 04:09:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:52.365 04:09:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:52.365 "name": "raid_bdev1", 00:18:52.365 "uuid": "4f8f5184-6641-4036-9089-31b144cea174", 00:18:52.365 "strip_size_kb": 64, 00:18:52.365 "state": "online", 00:18:52.365 "raid_level": "raid5f", 00:18:52.365 "superblock": true, 00:18:52.365 "num_base_bdevs": 4, 00:18:52.365 "num_base_bdevs_discovered": 4, 00:18:52.365 "num_base_bdevs_operational": 4, 00:18:52.365 "base_bdevs_list": [ 00:18:52.365 { 00:18:52.365 "name": "spare", 00:18:52.365 "uuid": "2ea64bc3-9e75-508f-bc1c-3545bd363e19", 00:18:52.365 "is_configured": true, 00:18:52.365 "data_offset": 2048, 00:18:52.365 "data_size": 63488 00:18:52.365 }, 00:18:52.365 { 00:18:52.365 "name": "BaseBdev2", 00:18:52.365 "uuid": "fb0af729-7761-5ac2-829d-2e7f51223e45", 00:18:52.365 "is_configured": true, 00:18:52.365 "data_offset": 2048, 00:18:52.365 "data_size": 63488 00:18:52.365 }, 00:18:52.365 { 00:18:52.365 "name": "BaseBdev3", 00:18:52.365 "uuid": "d7d340cc-9a97-5792-b361-bfe6d976eb72", 00:18:52.365 "is_configured": true, 00:18:52.365 "data_offset": 2048, 00:18:52.365 "data_size": 63488 00:18:52.365 }, 00:18:52.365 { 00:18:52.365 "name": "BaseBdev4", 00:18:52.365 "uuid": 
"d316d6b3-a3ea-552e-bfd7-ee5ed02094ef", 00:18:52.365 "is_configured": true, 00:18:52.365 "data_offset": 2048, 00:18:52.365 "data_size": 63488 00:18:52.365 } 00:18:52.365 ] 00:18:52.365 }' 00:18:52.365 04:09:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:52.365 04:09:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:18:52.365 04:09:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:52.365 04:09:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:18:52.365 04:09:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@709 -- # break 00:18:52.365 04:09:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:18:52.365 04:09:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:52.365 04:09:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:18:52.365 04:09:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:18:52.365 04:09:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:52.625 04:09:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:52.625 04:09:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:52.625 04:09:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:52.625 04:09:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:52.625 04:09:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:52.625 04:09:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:52.625 "name": 
"raid_bdev1", 00:18:52.625 "uuid": "4f8f5184-6641-4036-9089-31b144cea174", 00:18:52.625 "strip_size_kb": 64, 00:18:52.625 "state": "online", 00:18:52.625 "raid_level": "raid5f", 00:18:52.625 "superblock": true, 00:18:52.625 "num_base_bdevs": 4, 00:18:52.625 "num_base_bdevs_discovered": 4, 00:18:52.625 "num_base_bdevs_operational": 4, 00:18:52.625 "base_bdevs_list": [ 00:18:52.625 { 00:18:52.625 "name": "spare", 00:18:52.625 "uuid": "2ea64bc3-9e75-508f-bc1c-3545bd363e19", 00:18:52.625 "is_configured": true, 00:18:52.625 "data_offset": 2048, 00:18:52.625 "data_size": 63488 00:18:52.625 }, 00:18:52.625 { 00:18:52.625 "name": "BaseBdev2", 00:18:52.625 "uuid": "fb0af729-7761-5ac2-829d-2e7f51223e45", 00:18:52.625 "is_configured": true, 00:18:52.625 "data_offset": 2048, 00:18:52.625 "data_size": 63488 00:18:52.625 }, 00:18:52.625 { 00:18:52.625 "name": "BaseBdev3", 00:18:52.625 "uuid": "d7d340cc-9a97-5792-b361-bfe6d976eb72", 00:18:52.625 "is_configured": true, 00:18:52.625 "data_offset": 2048, 00:18:52.625 "data_size": 63488 00:18:52.625 }, 00:18:52.625 { 00:18:52.625 "name": "BaseBdev4", 00:18:52.625 "uuid": "d316d6b3-a3ea-552e-bfd7-ee5ed02094ef", 00:18:52.625 "is_configured": true, 00:18:52.625 "data_offset": 2048, 00:18:52.625 "data_size": 63488 00:18:52.625 } 00:18:52.625 ] 00:18:52.625 }' 00:18:52.625 04:09:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:52.625 04:09:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:18:52.625 04:09:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:52.625 04:09:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:18:52.625 04:09:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:18:52.625 04:09:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local 
raid_bdev_name=raid_bdev1 00:18:52.625 04:09:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:52.625 04:09:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:52.625 04:09:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:52.625 04:09:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:18:52.625 04:09:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:52.625 04:09:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:52.625 04:09:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:52.625 04:09:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:52.625 04:09:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:52.625 04:09:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:52.625 04:09:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:52.625 04:09:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:52.625 04:09:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:52.625 04:09:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:52.625 "name": "raid_bdev1", 00:18:52.625 "uuid": "4f8f5184-6641-4036-9089-31b144cea174", 00:18:52.625 "strip_size_kb": 64, 00:18:52.625 "state": "online", 00:18:52.625 "raid_level": "raid5f", 00:18:52.625 "superblock": true, 00:18:52.625 "num_base_bdevs": 4, 00:18:52.625 "num_base_bdevs_discovered": 4, 00:18:52.625 "num_base_bdevs_operational": 4, 00:18:52.625 "base_bdevs_list": [ 00:18:52.625 { 00:18:52.625 "name": "spare", 
00:18:52.625 "uuid": "2ea64bc3-9e75-508f-bc1c-3545bd363e19", 00:18:52.625 "is_configured": true, 00:18:52.625 "data_offset": 2048, 00:18:52.625 "data_size": 63488 00:18:52.625 }, 00:18:52.625 { 00:18:52.625 "name": "BaseBdev2", 00:18:52.625 "uuid": "fb0af729-7761-5ac2-829d-2e7f51223e45", 00:18:52.625 "is_configured": true, 00:18:52.625 "data_offset": 2048, 00:18:52.625 "data_size": 63488 00:18:52.625 }, 00:18:52.625 { 00:18:52.625 "name": "BaseBdev3", 00:18:52.625 "uuid": "d7d340cc-9a97-5792-b361-bfe6d976eb72", 00:18:52.625 "is_configured": true, 00:18:52.625 "data_offset": 2048, 00:18:52.625 "data_size": 63488 00:18:52.625 }, 00:18:52.625 { 00:18:52.625 "name": "BaseBdev4", 00:18:52.626 "uuid": "d316d6b3-a3ea-552e-bfd7-ee5ed02094ef", 00:18:52.626 "is_configured": true, 00:18:52.626 "data_offset": 2048, 00:18:52.626 "data_size": 63488 00:18:52.626 } 00:18:52.626 ] 00:18:52.626 }' 00:18:52.626 04:09:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:52.626 04:09:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:53.193 04:09:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:18:53.193 04:09:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:53.193 04:09:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:53.193 [2024-12-06 04:09:46.346276] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:53.193 [2024-12-06 04:09:46.346307] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:53.193 [2024-12-06 04:09:46.346387] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:53.193 [2024-12-06 04:09:46.346481] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:53.193 [2024-12-06 04:09:46.346502] bdev_raid.c: 
380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:18:53.193 04:09:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:53.193 04:09:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:53.193 04:09:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:53.193 04:09:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:53.193 04:09:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # jq length 00:18:53.193 04:09:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:53.193 04:09:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:18:53.193 04:09:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:18:53.193 04:09:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:18:53.193 04:09:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:18:53.193 04:09:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:18:53.193 04:09:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:18:53.193 04:09:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:18:53.193 04:09:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:18:53.193 04:09:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:18:53.193 04:09:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:18:53.193 04:09:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:18:53.193 04:09:46 bdev_raid.raid5f_rebuild_test_sb 
-- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:18:53.193 04:09:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:18:53.453 /dev/nbd0 00:18:53.453 04:09:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:18:53.453 04:09:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:18:53.453 04:09:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:18:53.453 04:09:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:18:53.453 04:09:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:18:53.453 04:09:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:18:53.453 04:09:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:18:53.453 04:09:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:18:53.453 04:09:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:18:53.453 04:09:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:18:53.453 04:09:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:18:53.453 1+0 records in 00:18:53.453 1+0 records out 00:18:53.453 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000334914 s, 12.2 MB/s 00:18:53.453 04:09:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:53.453 04:09:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:18:53.453 04:09:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f 
/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:53.453 04:09:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:18:53.453 04:09:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:18:53.453 04:09:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:18:53.453 04:09:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:18:53.453 04:09:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:18:53.712 /dev/nbd1 00:18:53.712 04:09:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:18:53.712 04:09:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:18:53.712 04:09:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:18:53.712 04:09:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:18:53.712 04:09:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:18:53.712 04:09:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:18:53.712 04:09:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:18:53.712 04:09:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:18:53.712 04:09:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:18:53.712 04:09:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:18:53.712 04:09:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:18:53.712 1+0 records in 00:18:53.712 1+0 records out 00:18:53.712 4096 bytes 
(4.1 kB, 4.0 KiB) copied, 0.000406687 s, 10.1 MB/s 00:18:53.712 04:09:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:53.712 04:09:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:18:53.712 04:09:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:53.712 04:09:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:18:53.712 04:09:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:18:53.712 04:09:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:18:53.712 04:09:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:18:53.712 04:09:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:18:53.712 04:09:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:18:53.712 04:09:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:18:53.712 04:09:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:18:53.712 04:09:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:18:53.712 04:09:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:18:53.712 04:09:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:18:53.712 04:09:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:18:53.970 04:09:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:18:53.970 04:09:47 bdev_raid.raid5f_rebuild_test_sb 
-- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:18:53.970 04:09:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:18:53.970 04:09:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:18:53.970 04:09:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:18:53.970 04:09:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:18:53.970 04:09:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:18:53.970 04:09:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:18:53.970 04:09:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:18:53.970 04:09:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:18:54.230 04:09:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:18:54.230 04:09:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:18:54.230 04:09:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:18:54.230 04:09:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:18:54.230 04:09:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:18:54.230 04:09:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:18:54.230 04:09:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:18:54.230 04:09:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:18:54.230 04:09:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:18:54.230 04:09:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:18:54.230 
04:09:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:54.230 04:09:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:54.230 04:09:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:54.230 04:09:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:18:54.230 04:09:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:54.230 04:09:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:54.230 [2024-12-06 04:09:47.483635] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:18:54.230 [2024-12-06 04:09:47.483757] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:54.230 [2024-12-06 04:09:47.483804] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ba80 00:18:54.230 [2024-12-06 04:09:47.483847] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:54.230 [2024-12-06 04:09:47.486457] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:54.230 [2024-12-06 04:09:47.486537] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:18:54.230 [2024-12-06 04:09:47.486652] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:18:54.230 [2024-12-06 04:09:47.486732] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:18:54.230 [2024-12-06 04:09:47.486901] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:18:54.230 [2024-12-06 04:09:47.487029] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:18:54.230 [2024-12-06 04:09:47.487178] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 
00:18:54.230 spare 00:18:54.230 04:09:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:54.230 04:09:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:18:54.230 04:09:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:54.230 04:09:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:54.491 [2024-12-06 04:09:47.587110] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:18:54.491 [2024-12-06 04:09:47.587189] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:18:54.491 [2024-12-06 04:09:47.587488] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000491d0 00:18:54.491 [2024-12-06 04:09:47.594767] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:18:54.491 [2024-12-06 04:09:47.594824] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:18:54.491 [2024-12-06 04:09:47.595057] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:54.491 04:09:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:54.491 04:09:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:18:54.491 04:09:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:54.491 04:09:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:54.491 04:09:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:54.491 04:09:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:54.491 04:09:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # 
local num_base_bdevs_operational=4 00:18:54.491 04:09:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:54.491 04:09:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:54.491 04:09:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:54.491 04:09:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:54.491 04:09:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:54.491 04:09:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:54.491 04:09:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:54.491 04:09:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:54.491 04:09:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:54.491 04:09:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:54.491 "name": "raid_bdev1", 00:18:54.491 "uuid": "4f8f5184-6641-4036-9089-31b144cea174", 00:18:54.491 "strip_size_kb": 64, 00:18:54.491 "state": "online", 00:18:54.491 "raid_level": "raid5f", 00:18:54.491 "superblock": true, 00:18:54.491 "num_base_bdevs": 4, 00:18:54.491 "num_base_bdevs_discovered": 4, 00:18:54.491 "num_base_bdevs_operational": 4, 00:18:54.491 "base_bdevs_list": [ 00:18:54.491 { 00:18:54.491 "name": "spare", 00:18:54.491 "uuid": "2ea64bc3-9e75-508f-bc1c-3545bd363e19", 00:18:54.491 "is_configured": true, 00:18:54.491 "data_offset": 2048, 00:18:54.491 "data_size": 63488 00:18:54.491 }, 00:18:54.491 { 00:18:54.491 "name": "BaseBdev2", 00:18:54.491 "uuid": "fb0af729-7761-5ac2-829d-2e7f51223e45", 00:18:54.491 "is_configured": true, 00:18:54.491 "data_offset": 2048, 00:18:54.491 "data_size": 63488 00:18:54.491 }, 00:18:54.491 { 00:18:54.491 "name": 
"BaseBdev3", 00:18:54.491 "uuid": "d7d340cc-9a97-5792-b361-bfe6d976eb72", 00:18:54.491 "is_configured": true, 00:18:54.491 "data_offset": 2048, 00:18:54.491 "data_size": 63488 00:18:54.491 }, 00:18:54.491 { 00:18:54.491 "name": "BaseBdev4", 00:18:54.491 "uuid": "d316d6b3-a3ea-552e-bfd7-ee5ed02094ef", 00:18:54.491 "is_configured": true, 00:18:54.491 "data_offset": 2048, 00:18:54.491 "data_size": 63488 00:18:54.491 } 00:18:54.491 ] 00:18:54.491 }' 00:18:54.491 04:09:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:54.491 04:09:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:54.751 04:09:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:18:54.751 04:09:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:54.751 04:09:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:18:54.751 04:09:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:18:54.751 04:09:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:54.751 04:09:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:54.751 04:09:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:54.751 04:09:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:54.751 04:09:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:54.751 04:09:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:54.751 04:09:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:54.751 "name": "raid_bdev1", 00:18:54.751 "uuid": "4f8f5184-6641-4036-9089-31b144cea174", 00:18:54.751 
"strip_size_kb": 64, 00:18:54.751 "state": "online", 00:18:54.751 "raid_level": "raid5f", 00:18:54.751 "superblock": true, 00:18:54.751 "num_base_bdevs": 4, 00:18:54.751 "num_base_bdevs_discovered": 4, 00:18:54.751 "num_base_bdevs_operational": 4, 00:18:54.751 "base_bdevs_list": [ 00:18:54.751 { 00:18:54.751 "name": "spare", 00:18:54.751 "uuid": "2ea64bc3-9e75-508f-bc1c-3545bd363e19", 00:18:54.751 "is_configured": true, 00:18:54.751 "data_offset": 2048, 00:18:54.751 "data_size": 63488 00:18:54.751 }, 00:18:54.751 { 00:18:54.751 "name": "BaseBdev2", 00:18:54.751 "uuid": "fb0af729-7761-5ac2-829d-2e7f51223e45", 00:18:54.751 "is_configured": true, 00:18:54.751 "data_offset": 2048, 00:18:54.751 "data_size": 63488 00:18:54.751 }, 00:18:54.751 { 00:18:54.751 "name": "BaseBdev3", 00:18:54.751 "uuid": "d7d340cc-9a97-5792-b361-bfe6d976eb72", 00:18:54.751 "is_configured": true, 00:18:54.751 "data_offset": 2048, 00:18:54.751 "data_size": 63488 00:18:54.751 }, 00:18:54.751 { 00:18:54.751 "name": "BaseBdev4", 00:18:54.751 "uuid": "d316d6b3-a3ea-552e-bfd7-ee5ed02094ef", 00:18:54.751 "is_configured": true, 00:18:54.751 "data_offset": 2048, 00:18:54.751 "data_size": 63488 00:18:54.751 } 00:18:54.751 ] 00:18:54.751 }' 00:18:54.751 04:09:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:55.010 04:09:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:18:55.010 04:09:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:55.010 04:09:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:18:55.010 04:09:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:55.010 04:09:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:55.010 04:09:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 
00:18:55.010 04:09:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:18:55.010 04:09:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:55.010 04:09:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:18:55.010 04:09:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:18:55.010 04:09:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:55.010 04:09:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:55.010 [2024-12-06 04:09:48.238652] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:55.010 04:09:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:55.010 04:09:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:18:55.010 04:09:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:55.010 04:09:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:55.010 04:09:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:55.010 04:09:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:55.011 04:09:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:18:55.011 04:09:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:55.011 04:09:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:55.011 04:09:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:55.011 04:09:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # 
local tmp 00:18:55.011 04:09:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:55.011 04:09:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:55.011 04:09:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:55.011 04:09:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:55.011 04:09:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:55.011 04:09:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:55.011 "name": "raid_bdev1", 00:18:55.011 "uuid": "4f8f5184-6641-4036-9089-31b144cea174", 00:18:55.011 "strip_size_kb": 64, 00:18:55.011 "state": "online", 00:18:55.011 "raid_level": "raid5f", 00:18:55.011 "superblock": true, 00:18:55.011 "num_base_bdevs": 4, 00:18:55.011 "num_base_bdevs_discovered": 3, 00:18:55.011 "num_base_bdevs_operational": 3, 00:18:55.011 "base_bdevs_list": [ 00:18:55.011 { 00:18:55.011 "name": null, 00:18:55.011 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:55.011 "is_configured": false, 00:18:55.011 "data_offset": 0, 00:18:55.011 "data_size": 63488 00:18:55.011 }, 00:18:55.011 { 00:18:55.011 "name": "BaseBdev2", 00:18:55.011 "uuid": "fb0af729-7761-5ac2-829d-2e7f51223e45", 00:18:55.011 "is_configured": true, 00:18:55.011 "data_offset": 2048, 00:18:55.011 "data_size": 63488 00:18:55.011 }, 00:18:55.011 { 00:18:55.011 "name": "BaseBdev3", 00:18:55.011 "uuid": "d7d340cc-9a97-5792-b361-bfe6d976eb72", 00:18:55.011 "is_configured": true, 00:18:55.011 "data_offset": 2048, 00:18:55.011 "data_size": 63488 00:18:55.011 }, 00:18:55.011 { 00:18:55.011 "name": "BaseBdev4", 00:18:55.011 "uuid": "d316d6b3-a3ea-552e-bfd7-ee5ed02094ef", 00:18:55.011 "is_configured": true, 00:18:55.011 "data_offset": 2048, 00:18:55.011 "data_size": 63488 00:18:55.011 } 00:18:55.011 ] 00:18:55.011 }' 
00:18:55.011 04:09:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:55.011 04:09:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:55.578 04:09:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:18:55.578 04:09:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:55.578 04:09:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:55.578 [2024-12-06 04:09:48.729864] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:18:55.578 [2024-12-06 04:09:48.730160] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:18:55.578 [2024-12-06 04:09:48.730244] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:18:55.578 [2024-12-06 04:09:48.730694] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:18:55.578 [2024-12-06 04:09:48.748324] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000492a0 00:18:55.578 04:09:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:55.578 04:09:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@757 -- # sleep 1 00:18:55.578 [2024-12-06 04:09:48.758753] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:18:56.535 04:09:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:56.535 04:09:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:56.535 04:09:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:56.535 04:09:49 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@171 -- # local target=spare 00:18:56.535 04:09:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:56.535 04:09:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:56.535 04:09:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:56.535 04:09:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:56.535 04:09:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:56.535 04:09:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:56.535 04:09:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:56.535 "name": "raid_bdev1", 00:18:56.535 "uuid": "4f8f5184-6641-4036-9089-31b144cea174", 00:18:56.535 "strip_size_kb": 64, 00:18:56.535 "state": "online", 00:18:56.535 "raid_level": "raid5f", 00:18:56.535 "superblock": true, 00:18:56.535 "num_base_bdevs": 4, 00:18:56.535 "num_base_bdevs_discovered": 4, 00:18:56.535 "num_base_bdevs_operational": 4, 00:18:56.535 "process": { 00:18:56.535 "type": "rebuild", 00:18:56.535 "target": "spare", 00:18:56.535 "progress": { 00:18:56.535 "blocks": 19200, 00:18:56.535 "percent": 10 00:18:56.535 } 00:18:56.535 }, 00:18:56.535 "base_bdevs_list": [ 00:18:56.535 { 00:18:56.535 "name": "spare", 00:18:56.535 "uuid": "2ea64bc3-9e75-508f-bc1c-3545bd363e19", 00:18:56.535 "is_configured": true, 00:18:56.535 "data_offset": 2048, 00:18:56.535 "data_size": 63488 00:18:56.535 }, 00:18:56.535 { 00:18:56.535 "name": "BaseBdev2", 00:18:56.535 "uuid": "fb0af729-7761-5ac2-829d-2e7f51223e45", 00:18:56.535 "is_configured": true, 00:18:56.535 "data_offset": 2048, 00:18:56.535 "data_size": 63488 00:18:56.535 }, 00:18:56.535 { 00:18:56.535 "name": "BaseBdev3", 00:18:56.535 "uuid": "d7d340cc-9a97-5792-b361-bfe6d976eb72", 00:18:56.535 
"is_configured": true, 00:18:56.535 "data_offset": 2048, 00:18:56.535 "data_size": 63488 00:18:56.535 }, 00:18:56.535 { 00:18:56.535 "name": "BaseBdev4", 00:18:56.535 "uuid": "d316d6b3-a3ea-552e-bfd7-ee5ed02094ef", 00:18:56.535 "is_configured": true, 00:18:56.535 "data_offset": 2048, 00:18:56.535 "data_size": 63488 00:18:56.535 } 00:18:56.535 ] 00:18:56.535 }' 00:18:56.535 04:09:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:56.535 04:09:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:56.535 04:09:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:56.535 04:09:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:56.535 04:09:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:18:56.535 04:09:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:56.535 04:09:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:56.535 [2024-12-06 04:09:49.881823] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:56.794 [2024-12-06 04:09:49.967495] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:18:56.794 [2024-12-06 04:09:49.967979] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:56.794 [2024-12-06 04:09:49.968038] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:56.794 [2024-12-06 04:09:49.968064] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:18:56.794 04:09:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:56.794 04:09:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state 
raid_bdev1 online raid5f 64 3 00:18:56.794 04:09:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:56.794 04:09:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:56.794 04:09:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:56.794 04:09:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:56.794 04:09:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:18:56.794 04:09:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:56.794 04:09:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:56.794 04:09:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:56.794 04:09:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:56.794 04:09:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:56.794 04:09:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:56.794 04:09:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:56.794 04:09:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:56.794 04:09:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:56.794 04:09:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:56.794 "name": "raid_bdev1", 00:18:56.794 "uuid": "4f8f5184-6641-4036-9089-31b144cea174", 00:18:56.794 "strip_size_kb": 64, 00:18:56.794 "state": "online", 00:18:56.794 "raid_level": "raid5f", 00:18:56.794 "superblock": true, 00:18:56.794 "num_base_bdevs": 4, 00:18:56.794 "num_base_bdevs_discovered": 3, 
00:18:56.794 "num_base_bdevs_operational": 3, 00:18:56.794 "base_bdevs_list": [ 00:18:56.794 { 00:18:56.794 "name": null, 00:18:56.794 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:56.794 "is_configured": false, 00:18:56.794 "data_offset": 0, 00:18:56.794 "data_size": 63488 00:18:56.794 }, 00:18:56.794 { 00:18:56.794 "name": "BaseBdev2", 00:18:56.794 "uuid": "fb0af729-7761-5ac2-829d-2e7f51223e45", 00:18:56.794 "is_configured": true, 00:18:56.794 "data_offset": 2048, 00:18:56.794 "data_size": 63488 00:18:56.794 }, 00:18:56.794 { 00:18:56.794 "name": "BaseBdev3", 00:18:56.794 "uuid": "d7d340cc-9a97-5792-b361-bfe6d976eb72", 00:18:56.794 "is_configured": true, 00:18:56.794 "data_offset": 2048, 00:18:56.794 "data_size": 63488 00:18:56.794 }, 00:18:56.794 { 00:18:56.794 "name": "BaseBdev4", 00:18:56.794 "uuid": "d316d6b3-a3ea-552e-bfd7-ee5ed02094ef", 00:18:56.794 "is_configured": true, 00:18:56.794 "data_offset": 2048, 00:18:56.794 "data_size": 63488 00:18:56.794 } 00:18:56.794 ] 00:18:56.794 }' 00:18:56.794 04:09:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:56.794 04:09:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:57.363 04:09:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:18:57.363 04:09:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:57.363 04:09:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:57.363 [2024-12-06 04:09:50.434470] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:18:57.363 [2024-12-06 04:09:50.434781] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:57.363 [2024-12-06 04:09:50.434900] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c380 00:18:57.363 [2024-12-06 04:09:50.434993] vbdev_passthru.c: 
697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:57.363 [2024-12-06 04:09:50.435679] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:57.363 [2024-12-06 04:09:50.435843] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:18:57.364 [2024-12-06 04:09:50.436062] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:18:57.364 [2024-12-06 04:09:50.436120] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:18:57.364 [2024-12-06 04:09:50.436169] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:18:57.364 [2024-12-06 04:09:50.436295] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:18:57.364 [2024-12-06 04:09:50.452497] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000049370 00:18:57.364 spare 00:18:57.364 04:09:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:57.364 04:09:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@764 -- # sleep 1 00:18:57.364 [2024-12-06 04:09:50.462249] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:18:58.302 04:09:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:58.302 04:09:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:58.302 04:09:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:58.302 04:09:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:58.302 04:09:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:58.302 04:09:51 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:58.302 04:09:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:58.302 04:09:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:58.302 04:09:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:58.302 04:09:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:58.302 04:09:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:58.302 "name": "raid_bdev1", 00:18:58.302 "uuid": "4f8f5184-6641-4036-9089-31b144cea174", 00:18:58.302 "strip_size_kb": 64, 00:18:58.302 "state": "online", 00:18:58.302 "raid_level": "raid5f", 00:18:58.302 "superblock": true, 00:18:58.302 "num_base_bdevs": 4, 00:18:58.302 "num_base_bdevs_discovered": 4, 00:18:58.302 "num_base_bdevs_operational": 4, 00:18:58.302 "process": { 00:18:58.302 "type": "rebuild", 00:18:58.302 "target": "spare", 00:18:58.302 "progress": { 00:18:58.302 "blocks": 19200, 00:18:58.302 "percent": 10 00:18:58.302 } 00:18:58.302 }, 00:18:58.302 "base_bdevs_list": [ 00:18:58.302 { 00:18:58.302 "name": "spare", 00:18:58.302 "uuid": "2ea64bc3-9e75-508f-bc1c-3545bd363e19", 00:18:58.302 "is_configured": true, 00:18:58.302 "data_offset": 2048, 00:18:58.302 "data_size": 63488 00:18:58.302 }, 00:18:58.302 { 00:18:58.302 "name": "BaseBdev2", 00:18:58.302 "uuid": "fb0af729-7761-5ac2-829d-2e7f51223e45", 00:18:58.302 "is_configured": true, 00:18:58.302 "data_offset": 2048, 00:18:58.302 "data_size": 63488 00:18:58.302 }, 00:18:58.302 { 00:18:58.302 "name": "BaseBdev3", 00:18:58.302 "uuid": "d7d340cc-9a97-5792-b361-bfe6d976eb72", 00:18:58.302 "is_configured": true, 00:18:58.302 "data_offset": 2048, 00:18:58.302 "data_size": 63488 00:18:58.302 }, 00:18:58.302 { 00:18:58.302 "name": "BaseBdev4", 00:18:58.302 "uuid": "d316d6b3-a3ea-552e-bfd7-ee5ed02094ef", 
00:18:58.302 "is_configured": true, 00:18:58.302 "data_offset": 2048, 00:18:58.302 "data_size": 63488 00:18:58.302 } 00:18:58.302 ] 00:18:58.302 }' 00:18:58.302 04:09:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:58.302 04:09:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:58.302 04:09:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:58.302 04:09:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:58.302 04:09:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:18:58.302 04:09:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:58.302 04:09:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:58.302 [2024-12-06 04:09:51.605712] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:58.563 [2024-12-06 04:09:51.668863] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:18:58.563 [2024-12-06 04:09:51.669397] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:58.563 [2024-12-06 04:09:51.669433] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:58.563 [2024-12-06 04:09:51.669443] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:18:58.563 04:09:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:58.563 04:09:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:18:58.563 04:09:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:58.563 04:09:51 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:58.563 04:09:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:58.563 04:09:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:58.563 04:09:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:18:58.563 04:09:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:58.563 04:09:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:58.563 04:09:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:58.563 04:09:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:58.563 04:09:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:58.563 04:09:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:58.563 04:09:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:58.563 04:09:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:58.563 04:09:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:58.563 04:09:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:58.563 "name": "raid_bdev1", 00:18:58.563 "uuid": "4f8f5184-6641-4036-9089-31b144cea174", 00:18:58.563 "strip_size_kb": 64, 00:18:58.563 "state": "online", 00:18:58.563 "raid_level": "raid5f", 00:18:58.563 "superblock": true, 00:18:58.563 "num_base_bdevs": 4, 00:18:58.563 "num_base_bdevs_discovered": 3, 00:18:58.563 "num_base_bdevs_operational": 3, 00:18:58.563 "base_bdevs_list": [ 00:18:58.563 { 00:18:58.563 "name": null, 00:18:58.563 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:58.563 "is_configured": 
false, 00:18:58.563 "data_offset": 0, 00:18:58.563 "data_size": 63488 00:18:58.563 }, 00:18:58.563 { 00:18:58.563 "name": "BaseBdev2", 00:18:58.563 "uuid": "fb0af729-7761-5ac2-829d-2e7f51223e45", 00:18:58.563 "is_configured": true, 00:18:58.563 "data_offset": 2048, 00:18:58.563 "data_size": 63488 00:18:58.563 }, 00:18:58.563 { 00:18:58.563 "name": "BaseBdev3", 00:18:58.563 "uuid": "d7d340cc-9a97-5792-b361-bfe6d976eb72", 00:18:58.563 "is_configured": true, 00:18:58.563 "data_offset": 2048, 00:18:58.563 "data_size": 63488 00:18:58.563 }, 00:18:58.563 { 00:18:58.563 "name": "BaseBdev4", 00:18:58.563 "uuid": "d316d6b3-a3ea-552e-bfd7-ee5ed02094ef", 00:18:58.563 "is_configured": true, 00:18:58.563 "data_offset": 2048, 00:18:58.563 "data_size": 63488 00:18:58.563 } 00:18:58.563 ] 00:18:58.563 }' 00:18:58.563 04:09:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:58.563 04:09:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:58.823 04:09:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:18:58.823 04:09:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:58.823 04:09:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:18:58.823 04:09:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:18:58.823 04:09:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:58.823 04:09:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:58.823 04:09:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:58.823 04:09:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:58.823 04:09:52 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:18:58.823 04:09:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:58.823 04:09:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:58.823 "name": "raid_bdev1", 00:18:58.823 "uuid": "4f8f5184-6641-4036-9089-31b144cea174", 00:18:58.823 "strip_size_kb": 64, 00:18:58.823 "state": "online", 00:18:58.823 "raid_level": "raid5f", 00:18:58.823 "superblock": true, 00:18:58.823 "num_base_bdevs": 4, 00:18:58.823 "num_base_bdevs_discovered": 3, 00:18:58.823 "num_base_bdevs_operational": 3, 00:18:58.823 "base_bdevs_list": [ 00:18:58.823 { 00:18:58.823 "name": null, 00:18:58.823 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:58.823 "is_configured": false, 00:18:58.823 "data_offset": 0, 00:18:58.823 "data_size": 63488 00:18:58.823 }, 00:18:58.823 { 00:18:58.823 "name": "BaseBdev2", 00:18:58.823 "uuid": "fb0af729-7761-5ac2-829d-2e7f51223e45", 00:18:58.823 "is_configured": true, 00:18:58.823 "data_offset": 2048, 00:18:58.823 "data_size": 63488 00:18:58.823 }, 00:18:58.823 { 00:18:58.823 "name": "BaseBdev3", 00:18:58.823 "uuid": "d7d340cc-9a97-5792-b361-bfe6d976eb72", 00:18:58.823 "is_configured": true, 00:18:58.823 "data_offset": 2048, 00:18:58.823 "data_size": 63488 00:18:58.823 }, 00:18:58.823 { 00:18:58.823 "name": "BaseBdev4", 00:18:58.823 "uuid": "d316d6b3-a3ea-552e-bfd7-ee5ed02094ef", 00:18:58.823 "is_configured": true, 00:18:58.823 "data_offset": 2048, 00:18:58.823 "data_size": 63488 00:18:58.823 } 00:18:58.823 ] 00:18:58.823 }' 00:18:58.823 04:09:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:59.083 04:09:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:18:59.083 04:09:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:59.083 04:09:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 
-- # [[ none == \n\o\n\e ]] 00:18:59.083 04:09:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:18:59.083 04:09:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:59.083 04:09:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:59.083 04:09:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:59.083 04:09:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:18:59.083 04:09:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:59.083 04:09:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:59.083 [2024-12-06 04:09:52.257988] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:18:59.083 [2024-12-06 04:09:52.258234] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:59.083 [2024-12-06 04:09:52.258321] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c980 00:18:59.083 [2024-12-06 04:09:52.258397] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:59.083 [2024-12-06 04:09:52.258899] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:59.083 [2024-12-06 04:09:52.259025] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:18:59.083 [2024-12-06 04:09:52.259196] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:18:59.083 [2024-12-06 04:09:52.259245] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:18:59.083 [2024-12-06 04:09:52.259290] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain 
this bdev's uuid 00:18:59.083 [2024-12-06 04:09:52.259318] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:18:59.083 BaseBdev1 00:18:59.083 04:09:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:59.083 04:09:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@775 -- # sleep 1 00:19:00.023 04:09:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:19:00.023 04:09:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:00.023 04:09:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:00.023 04:09:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:19:00.023 04:09:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:00.023 04:09:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:19:00.023 04:09:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:00.023 04:09:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:00.023 04:09:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:00.023 04:09:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:00.023 04:09:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:00.023 04:09:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:00.023 04:09:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:00.023 04:09:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:00.023 04:09:53 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:00.023 04:09:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:00.023 "name": "raid_bdev1", 00:19:00.023 "uuid": "4f8f5184-6641-4036-9089-31b144cea174", 00:19:00.023 "strip_size_kb": 64, 00:19:00.023 "state": "online", 00:19:00.023 "raid_level": "raid5f", 00:19:00.023 "superblock": true, 00:19:00.023 "num_base_bdevs": 4, 00:19:00.023 "num_base_bdevs_discovered": 3, 00:19:00.023 "num_base_bdevs_operational": 3, 00:19:00.023 "base_bdevs_list": [ 00:19:00.023 { 00:19:00.023 "name": null, 00:19:00.023 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:00.023 "is_configured": false, 00:19:00.023 "data_offset": 0, 00:19:00.023 "data_size": 63488 00:19:00.023 }, 00:19:00.023 { 00:19:00.023 "name": "BaseBdev2", 00:19:00.023 "uuid": "fb0af729-7761-5ac2-829d-2e7f51223e45", 00:19:00.023 "is_configured": true, 00:19:00.023 "data_offset": 2048, 00:19:00.023 "data_size": 63488 00:19:00.023 }, 00:19:00.023 { 00:19:00.023 "name": "BaseBdev3", 00:19:00.023 "uuid": "d7d340cc-9a97-5792-b361-bfe6d976eb72", 00:19:00.023 "is_configured": true, 00:19:00.023 "data_offset": 2048, 00:19:00.023 "data_size": 63488 00:19:00.023 }, 00:19:00.023 { 00:19:00.023 "name": "BaseBdev4", 00:19:00.023 "uuid": "d316d6b3-a3ea-552e-bfd7-ee5ed02094ef", 00:19:00.023 "is_configured": true, 00:19:00.023 "data_offset": 2048, 00:19:00.023 "data_size": 63488 00:19:00.023 } 00:19:00.023 ] 00:19:00.023 }' 00:19:00.023 04:09:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:00.023 04:09:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:00.283 04:09:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:19:00.283 04:09:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:00.283 04:09:53 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:19:00.283 04:09:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:19:00.283 04:09:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:00.542 04:09:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:00.542 04:09:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:00.542 04:09:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:00.542 04:09:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:00.542 04:09:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:00.542 04:09:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:00.542 "name": "raid_bdev1", 00:19:00.542 "uuid": "4f8f5184-6641-4036-9089-31b144cea174", 00:19:00.542 "strip_size_kb": 64, 00:19:00.542 "state": "online", 00:19:00.542 "raid_level": "raid5f", 00:19:00.542 "superblock": true, 00:19:00.542 "num_base_bdevs": 4, 00:19:00.542 "num_base_bdevs_discovered": 3, 00:19:00.542 "num_base_bdevs_operational": 3, 00:19:00.542 "base_bdevs_list": [ 00:19:00.542 { 00:19:00.542 "name": null, 00:19:00.542 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:00.542 "is_configured": false, 00:19:00.542 "data_offset": 0, 00:19:00.542 "data_size": 63488 00:19:00.542 }, 00:19:00.542 { 00:19:00.542 "name": "BaseBdev2", 00:19:00.542 "uuid": "fb0af729-7761-5ac2-829d-2e7f51223e45", 00:19:00.542 "is_configured": true, 00:19:00.542 "data_offset": 2048, 00:19:00.542 "data_size": 63488 00:19:00.542 }, 00:19:00.542 { 00:19:00.542 "name": "BaseBdev3", 00:19:00.542 "uuid": "d7d340cc-9a97-5792-b361-bfe6d976eb72", 00:19:00.542 "is_configured": true, 00:19:00.542 "data_offset": 2048, 00:19:00.542 
"data_size": 63488 00:19:00.542 }, 00:19:00.542 { 00:19:00.542 "name": "BaseBdev4", 00:19:00.542 "uuid": "d316d6b3-a3ea-552e-bfd7-ee5ed02094ef", 00:19:00.542 "is_configured": true, 00:19:00.542 "data_offset": 2048, 00:19:00.542 "data_size": 63488 00:19:00.542 } 00:19:00.542 ] 00:19:00.542 }' 00:19:00.542 04:09:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:00.542 04:09:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:19:00.542 04:09:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:00.542 04:09:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:19:00.542 04:09:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:19:00.542 04:09:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@652 -- # local es=0 00:19:00.542 04:09:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:19:00.542 04:09:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:19:00.542 04:09:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:00.542 04:09:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:19:00.542 04:09:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:00.542 04:09:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:19:00.542 04:09:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:00.542 04:09:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:00.542 [2024-12-06 
04:09:53.803663] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:19:00.542 [2024-12-06 04:09:53.803845] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:19:00.542 [2024-12-06 04:09:53.803862] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:19:00.542 request: 00:19:00.542 { 00:19:00.542 "base_bdev": "BaseBdev1", 00:19:00.542 "raid_bdev": "raid_bdev1", 00:19:00.542 "method": "bdev_raid_add_base_bdev", 00:19:00.542 "req_id": 1 00:19:00.542 } 00:19:00.542 Got JSON-RPC error response 00:19:00.542 response: 00:19:00.542 { 00:19:00.542 "code": -22, 00:19:00.542 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:19:00.542 } 00:19:00.542 04:09:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:19:00.542 04:09:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@655 -- # es=1 00:19:00.542 04:09:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:19:00.542 04:09:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:19:00.542 04:09:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:19:00.542 04:09:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@779 -- # sleep 1 00:19:01.482 04:09:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:19:01.482 04:09:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:01.482 04:09:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:01.482 04:09:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:19:01.482 04:09:54 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:01.482 04:09:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:19:01.482 04:09:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:01.482 04:09:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:01.482 04:09:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:01.482 04:09:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:01.482 04:09:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:01.482 04:09:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:01.482 04:09:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:01.482 04:09:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:01.742 04:09:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:01.742 04:09:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:01.742 "name": "raid_bdev1", 00:19:01.742 "uuid": "4f8f5184-6641-4036-9089-31b144cea174", 00:19:01.742 "strip_size_kb": 64, 00:19:01.742 "state": "online", 00:19:01.742 "raid_level": "raid5f", 00:19:01.742 "superblock": true, 00:19:01.742 "num_base_bdevs": 4, 00:19:01.742 "num_base_bdevs_discovered": 3, 00:19:01.742 "num_base_bdevs_operational": 3, 00:19:01.742 "base_bdevs_list": [ 00:19:01.742 { 00:19:01.742 "name": null, 00:19:01.742 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:01.742 "is_configured": false, 00:19:01.742 "data_offset": 0, 00:19:01.742 "data_size": 63488 00:19:01.742 }, 00:19:01.742 { 00:19:01.742 "name": "BaseBdev2", 00:19:01.742 "uuid": "fb0af729-7761-5ac2-829d-2e7f51223e45", 00:19:01.742 
"is_configured": true, 00:19:01.742 "data_offset": 2048, 00:19:01.742 "data_size": 63488 00:19:01.742 }, 00:19:01.742 { 00:19:01.742 "name": "BaseBdev3", 00:19:01.742 "uuid": "d7d340cc-9a97-5792-b361-bfe6d976eb72", 00:19:01.742 "is_configured": true, 00:19:01.742 "data_offset": 2048, 00:19:01.742 "data_size": 63488 00:19:01.742 }, 00:19:01.742 { 00:19:01.742 "name": "BaseBdev4", 00:19:01.742 "uuid": "d316d6b3-a3ea-552e-bfd7-ee5ed02094ef", 00:19:01.742 "is_configured": true, 00:19:01.742 "data_offset": 2048, 00:19:01.742 "data_size": 63488 00:19:01.742 } 00:19:01.742 ] 00:19:01.742 }' 00:19:01.742 04:09:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:01.742 04:09:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:02.009 04:09:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:19:02.009 04:09:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:02.009 04:09:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:19:02.009 04:09:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:19:02.009 04:09:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:02.009 04:09:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:02.009 04:09:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:02.009 04:09:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:02.009 04:09:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:02.009 04:09:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:02.009 04:09:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # 
raid_bdev_info='{ 00:19:02.009 "name": "raid_bdev1", 00:19:02.009 "uuid": "4f8f5184-6641-4036-9089-31b144cea174", 00:19:02.009 "strip_size_kb": 64, 00:19:02.009 "state": "online", 00:19:02.009 "raid_level": "raid5f", 00:19:02.009 "superblock": true, 00:19:02.009 "num_base_bdevs": 4, 00:19:02.009 "num_base_bdevs_discovered": 3, 00:19:02.009 "num_base_bdevs_operational": 3, 00:19:02.009 "base_bdevs_list": [ 00:19:02.009 { 00:19:02.009 "name": null, 00:19:02.009 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:02.009 "is_configured": false, 00:19:02.009 "data_offset": 0, 00:19:02.009 "data_size": 63488 00:19:02.009 }, 00:19:02.009 { 00:19:02.010 "name": "BaseBdev2", 00:19:02.010 "uuid": "fb0af729-7761-5ac2-829d-2e7f51223e45", 00:19:02.010 "is_configured": true, 00:19:02.010 "data_offset": 2048, 00:19:02.010 "data_size": 63488 00:19:02.010 }, 00:19:02.010 { 00:19:02.010 "name": "BaseBdev3", 00:19:02.010 "uuid": "d7d340cc-9a97-5792-b361-bfe6d976eb72", 00:19:02.010 "is_configured": true, 00:19:02.010 "data_offset": 2048, 00:19:02.010 "data_size": 63488 00:19:02.010 }, 00:19:02.010 { 00:19:02.010 "name": "BaseBdev4", 00:19:02.010 "uuid": "d316d6b3-a3ea-552e-bfd7-ee5ed02094ef", 00:19:02.010 "is_configured": true, 00:19:02.010 "data_offset": 2048, 00:19:02.010 "data_size": 63488 00:19:02.010 } 00:19:02.010 ] 00:19:02.010 }' 00:19:02.010 04:09:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:02.010 04:09:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:19:02.010 04:09:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:02.010 04:09:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:19:02.010 04:09:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@784 -- # killprocess 85402 00:19:02.010 04:09:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@954 -- # '[' -z 
85402 ']' 00:19:02.010 04:09:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@958 -- # kill -0 85402 00:19:02.010 04:09:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@959 -- # uname 00:19:02.010 04:09:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:02.010 04:09:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 85402 00:19:02.268 killing process with pid 85402 00:19:02.268 Received shutdown signal, test time was about 60.000000 seconds 00:19:02.268 00:19:02.268 Latency(us) 00:19:02.268 [2024-12-06T04:09:55.622Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:02.268 [2024-12-06T04:09:55.622Z] =================================================================================================================== 00:19:02.268 [2024-12-06T04:09:55.622Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:19:02.268 04:09:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:02.268 04:09:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:02.268 04:09:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 85402' 00:19:02.268 04:09:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@973 -- # kill 85402 00:19:02.268 [2024-12-06 04:09:55.371150] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:19:02.268 [2024-12-06 04:09:55.371272] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:02.268 [2024-12-06 04:09:55.371350] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:19:02.268 [2024-12-06 04:09:55.371363] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:19:02.268 
04:09:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@978 -- # wait 85402 00:19:02.527 [2024-12-06 04:09:55.850772] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:19:03.907 04:09:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@786 -- # return 0 00:19:03.907 00:19:03.907 real 0m26.682s 00:19:03.907 user 0m33.451s 00:19:03.907 sys 0m2.794s 00:19:03.907 04:09:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:03.907 ************************************ 00:19:03.907 END TEST raid5f_rebuild_test_sb 00:19:03.907 ************************************ 00:19:03.907 04:09:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:03.907 04:09:56 bdev_raid -- bdev/bdev_raid.sh@995 -- # base_blocklen=4096 00:19:03.907 04:09:56 bdev_raid -- bdev/bdev_raid.sh@997 -- # run_test raid_state_function_test_sb_4k raid_state_function_test raid1 2 true 00:19:03.907 04:09:56 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:19:03.907 04:09:56 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:03.907 04:09:56 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:19:03.907 ************************************ 00:19:03.907 START TEST raid_state_function_test_sb_4k 00:19:03.907 ************************************ 00:19:03.907 04:09:56 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@1129 -- # raid_state_function_test raid1 2 true 00:19:03.907 04:09:56 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:19:03.907 04:09:56 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:19:03.907 04:09:56 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:19:03.907 04:09:56 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:19:03.907 04:09:56 
bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:19:03.907 04:09:57 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:19:03.907 04:09:57 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:19:03.907 04:09:57 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:19:03.907 04:09:57 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:19:03.907 04:09:57 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:19:03.907 04:09:57 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:19:03.907 04:09:57 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:19:03.907 04:09:57 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:19:03.907 04:09:57 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:19:03.907 04:09:57 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:19:03.907 04:09:57 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@211 -- # local strip_size 00:19:03.907 04:09:57 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:19:03.907 04:09:57 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:19:03.907 04:09:57 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:19:03.907 04:09:57 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:19:03.907 Process raid pid: 86208 00:19:03.907 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:19:03.907 04:09:57 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:19:03.907 04:09:57 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:19:03.907 04:09:57 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@229 -- # raid_pid=86208 00:19:03.907 04:09:57 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 86208' 00:19:03.907 04:09:57 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@231 -- # waitforlisten 86208 00:19:03.907 04:09:57 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@835 -- # '[' -z 86208 ']' 00:19:03.907 04:09:57 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:03.907 04:09:57 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:03.907 04:09:57 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:03.907 04:09:57 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:03.907 04:09:57 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:03.907 04:09:57 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:19:03.907 [2024-12-06 04:09:57.088763] Starting SPDK v25.01-pre git sha1 a4d2a837b / DPDK 24.03.0 initialization... 
00:19:03.907 [2024-12-06 04:09:57.088957] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:03.907 [2024-12-06 04:09:57.246250] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:04.166 [2024-12-06 04:09:57.361895] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:04.426 [2024-12-06 04:09:57.559374] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:04.426 [2024-12-06 04:09:57.559489] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:04.686 04:09:57 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:04.686 04:09:57 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@868 -- # return 0 00:19:04.686 04:09:57 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:19:04.686 04:09:57 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:04.686 04:09:57 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:04.686 [2024-12-06 04:09:57.914995] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:19:04.686 [2024-12-06 04:09:57.915114] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:19:04.686 [2024-12-06 04:09:57.915145] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:19:04.686 [2024-12-06 04:09:57.915169] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:19:04.686 04:09:57 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:19:04.686 04:09:57 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:19:04.686 04:09:57 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:19:04.686 04:09:57 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:19:04.686 04:09:57 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:04.686 04:09:57 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:04.686 04:09:57 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:19:04.686 04:09:57 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:04.686 04:09:57 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:04.686 04:09:57 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:04.686 04:09:57 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:04.686 04:09:57 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:04.686 04:09:57 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:04.686 04:09:57 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:04.686 04:09:57 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:04.686 04:09:57 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:04.686 04:09:57 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:04.686 "name": "Existed_Raid", 00:19:04.686 "uuid": 
"104d3394-f404-44c0-a134-6f788c403d0b", 00:19:04.686 "strip_size_kb": 0, 00:19:04.686 "state": "configuring", 00:19:04.686 "raid_level": "raid1", 00:19:04.686 "superblock": true, 00:19:04.686 "num_base_bdevs": 2, 00:19:04.686 "num_base_bdevs_discovered": 0, 00:19:04.686 "num_base_bdevs_operational": 2, 00:19:04.686 "base_bdevs_list": [ 00:19:04.686 { 00:19:04.686 "name": "BaseBdev1", 00:19:04.686 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:04.686 "is_configured": false, 00:19:04.686 "data_offset": 0, 00:19:04.686 "data_size": 0 00:19:04.686 }, 00:19:04.686 { 00:19:04.686 "name": "BaseBdev2", 00:19:04.686 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:04.686 "is_configured": false, 00:19:04.686 "data_offset": 0, 00:19:04.686 "data_size": 0 00:19:04.686 } 00:19:04.686 ] 00:19:04.686 }' 00:19:04.686 04:09:57 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:04.686 04:09:57 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:05.257 04:09:58 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:19:05.257 04:09:58 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:05.257 04:09:58 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:05.257 [2024-12-06 04:09:58.362205] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:19:05.257 [2024-12-06 04:09:58.362245] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:19:05.257 04:09:58 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:05.257 04:09:58 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:19:05.257 04:09:58 
bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:05.257 04:09:58 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:05.257 [2024-12-06 04:09:58.374200] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:19:05.257 [2024-12-06 04:09:58.374312] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:19:05.257 [2024-12-06 04:09:58.374342] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:19:05.257 [2024-12-06 04:09:58.374354] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:19:05.257 04:09:58 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:05.257 04:09:58 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 4096 -b BaseBdev1 00:19:05.257 04:09:58 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:05.257 04:09:58 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:05.257 [2024-12-06 04:09:58.421576] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:19:05.257 BaseBdev1 00:19:05.257 04:09:58 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:05.257 04:09:58 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:19:05.257 04:09:58 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:19:05.257 04:09:58 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:19:05.257 04:09:58 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@905 -- # local i 00:19:05.257 04:09:58 bdev_raid.raid_state_function_test_sb_4k 
-- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:19:05.257 04:09:58 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:19:05.257 04:09:58 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:19:05.257 04:09:58 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:05.257 04:09:58 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:05.257 04:09:58 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:05.257 04:09:58 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:19:05.257 04:09:58 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:05.257 04:09:58 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:05.257 [ 00:19:05.257 { 00:19:05.257 "name": "BaseBdev1", 00:19:05.257 "aliases": [ 00:19:05.257 "601f0d56-ba0a-4f8e-b2df-82082eca3513" 00:19:05.257 ], 00:19:05.257 "product_name": "Malloc disk", 00:19:05.257 "block_size": 4096, 00:19:05.257 "num_blocks": 8192, 00:19:05.257 "uuid": "601f0d56-ba0a-4f8e-b2df-82082eca3513", 00:19:05.257 "assigned_rate_limits": { 00:19:05.257 "rw_ios_per_sec": 0, 00:19:05.257 "rw_mbytes_per_sec": 0, 00:19:05.257 "r_mbytes_per_sec": 0, 00:19:05.257 "w_mbytes_per_sec": 0 00:19:05.257 }, 00:19:05.257 "claimed": true, 00:19:05.257 "claim_type": "exclusive_write", 00:19:05.257 "zoned": false, 00:19:05.257 "supported_io_types": { 00:19:05.257 "read": true, 00:19:05.257 "write": true, 00:19:05.257 "unmap": true, 00:19:05.257 "flush": true, 00:19:05.257 "reset": true, 00:19:05.257 "nvme_admin": false, 00:19:05.257 "nvme_io": false, 00:19:05.257 "nvme_io_md": false, 00:19:05.257 "write_zeroes": true, 00:19:05.257 "zcopy": true, 00:19:05.257 
"get_zone_info": false, 00:19:05.257 "zone_management": false, 00:19:05.257 "zone_append": false, 00:19:05.257 "compare": false, 00:19:05.257 "compare_and_write": false, 00:19:05.257 "abort": true, 00:19:05.257 "seek_hole": false, 00:19:05.257 "seek_data": false, 00:19:05.257 "copy": true, 00:19:05.257 "nvme_iov_md": false 00:19:05.257 }, 00:19:05.257 "memory_domains": [ 00:19:05.257 { 00:19:05.257 "dma_device_id": "system", 00:19:05.257 "dma_device_type": 1 00:19:05.257 }, 00:19:05.257 { 00:19:05.257 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:05.257 "dma_device_type": 2 00:19:05.257 } 00:19:05.257 ], 00:19:05.257 "driver_specific": {} 00:19:05.257 } 00:19:05.257 ] 00:19:05.257 04:09:58 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:05.257 04:09:58 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@911 -- # return 0 00:19:05.257 04:09:58 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:19:05.257 04:09:58 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:19:05.257 04:09:58 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:19:05.257 04:09:58 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:05.257 04:09:58 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:05.257 04:09:58 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:19:05.257 04:09:58 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:05.257 04:09:58 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:05.257 04:09:58 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:19:05.257 04:09:58 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:05.257 04:09:58 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:05.257 04:09:58 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:05.257 04:09:58 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:05.257 04:09:58 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:05.257 04:09:58 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:05.257 04:09:58 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:05.257 "name": "Existed_Raid", 00:19:05.257 "uuid": "32ec4504-d241-4297-b6ea-25e5960f9eac", 00:19:05.257 "strip_size_kb": 0, 00:19:05.257 "state": "configuring", 00:19:05.257 "raid_level": "raid1", 00:19:05.257 "superblock": true, 00:19:05.257 "num_base_bdevs": 2, 00:19:05.257 "num_base_bdevs_discovered": 1, 00:19:05.257 "num_base_bdevs_operational": 2, 00:19:05.257 "base_bdevs_list": [ 00:19:05.257 { 00:19:05.257 "name": "BaseBdev1", 00:19:05.257 "uuid": "601f0d56-ba0a-4f8e-b2df-82082eca3513", 00:19:05.257 "is_configured": true, 00:19:05.257 "data_offset": 256, 00:19:05.257 "data_size": 7936 00:19:05.257 }, 00:19:05.257 { 00:19:05.257 "name": "BaseBdev2", 00:19:05.257 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:05.257 "is_configured": false, 00:19:05.257 "data_offset": 0, 00:19:05.257 "data_size": 0 00:19:05.257 } 00:19:05.257 ] 00:19:05.257 }' 00:19:05.257 04:09:58 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:05.257 04:09:58 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:05.828 04:09:58 bdev_raid.raid_state_function_test_sb_4k -- 
bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:19:05.828 04:09:58 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:05.828 04:09:58 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:05.828 [2024-12-06 04:09:58.916766] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:19:05.828 [2024-12-06 04:09:58.916873] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:19:05.828 04:09:58 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:05.828 04:09:58 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:19:05.828 04:09:58 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:05.828 04:09:58 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:05.828 [2024-12-06 04:09:58.928788] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:19:05.828 [2024-12-06 04:09:58.930612] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:19:05.828 [2024-12-06 04:09:58.930686] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:19:05.828 04:09:58 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:05.828 04:09:58 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:19:05.828 04:09:58 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:19:05.828 04:09:58 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:19:05.828 04:09:58 
bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:19:05.828 04:09:58 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:19:05.828 04:09:58 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:05.828 04:09:58 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:05.828 04:09:58 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:19:05.828 04:09:58 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:05.828 04:09:58 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:05.828 04:09:58 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:05.828 04:09:58 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:05.828 04:09:58 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:05.828 04:09:58 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:05.828 04:09:58 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:05.828 04:09:58 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:05.828 04:09:58 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:05.828 04:09:58 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:05.828 "name": "Existed_Raid", 00:19:05.828 "uuid": "b5efb5fa-4363-452d-9257-8e26672ad65e", 00:19:05.828 "strip_size_kb": 0, 00:19:05.828 "state": "configuring", 00:19:05.828 "raid_level": "raid1", 00:19:05.828 "superblock": true, 
00:19:05.828 "num_base_bdevs": 2, 00:19:05.828 "num_base_bdevs_discovered": 1, 00:19:05.828 "num_base_bdevs_operational": 2, 00:19:05.828 "base_bdevs_list": [ 00:19:05.828 { 00:19:05.828 "name": "BaseBdev1", 00:19:05.828 "uuid": "601f0d56-ba0a-4f8e-b2df-82082eca3513", 00:19:05.828 "is_configured": true, 00:19:05.828 "data_offset": 256, 00:19:05.828 "data_size": 7936 00:19:05.828 }, 00:19:05.828 { 00:19:05.828 "name": "BaseBdev2", 00:19:05.828 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:05.828 "is_configured": false, 00:19:05.828 "data_offset": 0, 00:19:05.828 "data_size": 0 00:19:05.828 } 00:19:05.828 ] 00:19:05.828 }' 00:19:05.828 04:09:58 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:05.828 04:09:58 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:06.089 04:09:59 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 4096 -b BaseBdev2 00:19:06.089 04:09:59 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:06.089 04:09:59 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:06.089 [2024-12-06 04:09:59.384182] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:19:06.089 [2024-12-06 04:09:59.384486] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:19:06.089 [2024-12-06 04:09:59.384504] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:19:06.089 [2024-12-06 04:09:59.384757] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:19:06.089 [2024-12-06 04:09:59.384932] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:19:06.089 [2024-12-06 04:09:59.384945] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 
0x617000007e80 00:19:06.089 [2024-12-06 04:09:59.385107] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:06.089 BaseBdev2 00:19:06.089 04:09:59 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:06.089 04:09:59 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:19:06.089 04:09:59 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:19:06.089 04:09:59 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:19:06.089 04:09:59 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@905 -- # local i 00:19:06.089 04:09:59 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:19:06.089 04:09:59 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:19:06.089 04:09:59 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:19:06.089 04:09:59 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:06.089 04:09:59 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:06.089 04:09:59 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:06.089 04:09:59 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:19:06.089 04:09:59 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:06.089 04:09:59 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:06.089 [ 00:19:06.089 { 00:19:06.089 "name": "BaseBdev2", 00:19:06.089 "aliases": [ 00:19:06.089 "a8f669e3-7dd0-4c3d-b65d-d1243c4e2867" 00:19:06.089 ], 00:19:06.089 "product_name": "Malloc 
disk", 00:19:06.089 "block_size": 4096, 00:19:06.089 "num_blocks": 8192, 00:19:06.089 "uuid": "a8f669e3-7dd0-4c3d-b65d-d1243c4e2867", 00:19:06.089 "assigned_rate_limits": { 00:19:06.089 "rw_ios_per_sec": 0, 00:19:06.089 "rw_mbytes_per_sec": 0, 00:19:06.089 "r_mbytes_per_sec": 0, 00:19:06.089 "w_mbytes_per_sec": 0 00:19:06.089 }, 00:19:06.089 "claimed": true, 00:19:06.089 "claim_type": "exclusive_write", 00:19:06.089 "zoned": false, 00:19:06.089 "supported_io_types": { 00:19:06.089 "read": true, 00:19:06.089 "write": true, 00:19:06.089 "unmap": true, 00:19:06.089 "flush": true, 00:19:06.089 "reset": true, 00:19:06.089 "nvme_admin": false, 00:19:06.089 "nvme_io": false, 00:19:06.089 "nvme_io_md": false, 00:19:06.089 "write_zeroes": true, 00:19:06.089 "zcopy": true, 00:19:06.089 "get_zone_info": false, 00:19:06.089 "zone_management": false, 00:19:06.089 "zone_append": false, 00:19:06.089 "compare": false, 00:19:06.089 "compare_and_write": false, 00:19:06.089 "abort": true, 00:19:06.089 "seek_hole": false, 00:19:06.089 "seek_data": false, 00:19:06.089 "copy": true, 00:19:06.089 "nvme_iov_md": false 00:19:06.089 }, 00:19:06.089 "memory_domains": [ 00:19:06.089 { 00:19:06.089 "dma_device_id": "system", 00:19:06.089 "dma_device_type": 1 00:19:06.089 }, 00:19:06.089 { 00:19:06.089 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:06.089 "dma_device_type": 2 00:19:06.089 } 00:19:06.089 ], 00:19:06.089 "driver_specific": {} 00:19:06.089 } 00:19:06.089 ] 00:19:06.090 04:09:59 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:06.090 04:09:59 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@911 -- # return 0 00:19:06.090 04:09:59 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:19:06.090 04:09:59 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:19:06.090 04:09:59 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@255 
-- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:19:06.090 04:09:59 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:19:06.090 04:09:59 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:06.090 04:09:59 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:06.090 04:09:59 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:06.090 04:09:59 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:19:06.090 04:09:59 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:06.090 04:09:59 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:06.090 04:09:59 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:06.090 04:09:59 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:06.090 04:09:59 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:06.090 04:09:59 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:06.090 04:09:59 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:06.090 04:09:59 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:06.349 04:09:59 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:06.349 04:09:59 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:06.349 "name": "Existed_Raid", 00:19:06.349 "uuid": "b5efb5fa-4363-452d-9257-8e26672ad65e", 00:19:06.349 "strip_size_kb": 0, 00:19:06.349 "state": "online", 
00:19:06.349 "raid_level": "raid1", 00:19:06.349 "superblock": true, 00:19:06.349 "num_base_bdevs": 2, 00:19:06.349 "num_base_bdevs_discovered": 2, 00:19:06.349 "num_base_bdevs_operational": 2, 00:19:06.349 "base_bdevs_list": [ 00:19:06.349 { 00:19:06.349 "name": "BaseBdev1", 00:19:06.349 "uuid": "601f0d56-ba0a-4f8e-b2df-82082eca3513", 00:19:06.349 "is_configured": true, 00:19:06.349 "data_offset": 256, 00:19:06.349 "data_size": 7936 00:19:06.349 }, 00:19:06.349 { 00:19:06.349 "name": "BaseBdev2", 00:19:06.349 "uuid": "a8f669e3-7dd0-4c3d-b65d-d1243c4e2867", 00:19:06.349 "is_configured": true, 00:19:06.349 "data_offset": 256, 00:19:06.349 "data_size": 7936 00:19:06.349 } 00:19:06.349 ] 00:19:06.350 }' 00:19:06.350 04:09:59 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:06.350 04:09:59 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:06.610 04:09:59 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:19:06.610 04:09:59 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:19:06.610 04:09:59 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:19:06.610 04:09:59 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:19:06.610 04:09:59 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@184 -- # local name 00:19:06.610 04:09:59 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:19:06.610 04:09:59 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:19:06.610 04:09:59 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:19:06.610 04:09:59 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 
00:19:06.610 04:09:59 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:06.610 [2024-12-06 04:09:59.875693] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:06.610 04:09:59 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:06.610 04:09:59 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:19:06.610 "name": "Existed_Raid", 00:19:06.610 "aliases": [ 00:19:06.610 "b5efb5fa-4363-452d-9257-8e26672ad65e" 00:19:06.610 ], 00:19:06.610 "product_name": "Raid Volume", 00:19:06.610 "block_size": 4096, 00:19:06.610 "num_blocks": 7936, 00:19:06.610 "uuid": "b5efb5fa-4363-452d-9257-8e26672ad65e", 00:19:06.610 "assigned_rate_limits": { 00:19:06.610 "rw_ios_per_sec": 0, 00:19:06.610 "rw_mbytes_per_sec": 0, 00:19:06.610 "r_mbytes_per_sec": 0, 00:19:06.610 "w_mbytes_per_sec": 0 00:19:06.610 }, 00:19:06.610 "claimed": false, 00:19:06.610 "zoned": false, 00:19:06.610 "supported_io_types": { 00:19:06.610 "read": true, 00:19:06.610 "write": true, 00:19:06.610 "unmap": false, 00:19:06.610 "flush": false, 00:19:06.610 "reset": true, 00:19:06.610 "nvme_admin": false, 00:19:06.610 "nvme_io": false, 00:19:06.610 "nvme_io_md": false, 00:19:06.610 "write_zeroes": true, 00:19:06.610 "zcopy": false, 00:19:06.610 "get_zone_info": false, 00:19:06.610 "zone_management": false, 00:19:06.610 "zone_append": false, 00:19:06.610 "compare": false, 00:19:06.610 "compare_and_write": false, 00:19:06.610 "abort": false, 00:19:06.610 "seek_hole": false, 00:19:06.610 "seek_data": false, 00:19:06.610 "copy": false, 00:19:06.610 "nvme_iov_md": false 00:19:06.610 }, 00:19:06.610 "memory_domains": [ 00:19:06.610 { 00:19:06.610 "dma_device_id": "system", 00:19:06.610 "dma_device_type": 1 00:19:06.610 }, 00:19:06.610 { 00:19:06.610 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:06.610 "dma_device_type": 2 00:19:06.610 }, 00:19:06.610 { 00:19:06.610 
"dma_device_id": "system", 00:19:06.610 "dma_device_type": 1 00:19:06.610 }, 00:19:06.610 { 00:19:06.610 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:06.610 "dma_device_type": 2 00:19:06.610 } 00:19:06.610 ], 00:19:06.610 "driver_specific": { 00:19:06.610 "raid": { 00:19:06.610 "uuid": "b5efb5fa-4363-452d-9257-8e26672ad65e", 00:19:06.610 "strip_size_kb": 0, 00:19:06.610 "state": "online", 00:19:06.610 "raid_level": "raid1", 00:19:06.610 "superblock": true, 00:19:06.610 "num_base_bdevs": 2, 00:19:06.610 "num_base_bdevs_discovered": 2, 00:19:06.610 "num_base_bdevs_operational": 2, 00:19:06.610 "base_bdevs_list": [ 00:19:06.610 { 00:19:06.610 "name": "BaseBdev1", 00:19:06.610 "uuid": "601f0d56-ba0a-4f8e-b2df-82082eca3513", 00:19:06.610 "is_configured": true, 00:19:06.610 "data_offset": 256, 00:19:06.610 "data_size": 7936 00:19:06.610 }, 00:19:06.610 { 00:19:06.610 "name": "BaseBdev2", 00:19:06.610 "uuid": "a8f669e3-7dd0-4c3d-b65d-d1243c4e2867", 00:19:06.610 "is_configured": true, 00:19:06.610 "data_offset": 256, 00:19:06.610 "data_size": 7936 00:19:06.610 } 00:19:06.610 ] 00:19:06.610 } 00:19:06.610 } 00:19:06.610 }' 00:19:06.610 04:09:59 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:19:06.610 04:09:59 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:19:06.610 BaseBdev2' 00:19:06.610 04:09:59 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:06.871 04:09:59 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 ' 00:19:06.871 04:09:59 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:19:06.872 04:09:59 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 
00:19:06.872 04:09:59 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:06.872 04:09:59 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:06.872 04:09:59 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:06.872 04:10:00 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:06.872 04:10:00 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 ' 00:19:06.872 04:10:00 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]] 00:19:06.872 04:10:00 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:19:06.872 04:10:00 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:19:06.872 04:10:00 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:06.872 04:10:00 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:06.872 04:10:00 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:06.872 04:10:00 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:06.872 04:10:00 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 ' 00:19:06.872 04:10:00 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]] 00:19:06.872 04:10:00 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:19:06.872 04:10:00 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:06.872 
04:10:00 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:06.872 [2024-12-06 04:10:00.103135] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:19:06.872 04:10:00 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:06.872 04:10:00 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@260 -- # local expected_state 00:19:06.872 04:10:00 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:19:06.872 04:10:00 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@198 -- # case $1 in 00:19:06.872 04:10:00 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@199 -- # return 0 00:19:06.872 04:10:00 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:19:06.872 04:10:00 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:19:06.872 04:10:00 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:19:06.872 04:10:00 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:06.872 04:10:00 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:06.872 04:10:00 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:06.872 04:10:00 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:19:06.872 04:10:00 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:06.872 04:10:00 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:06.872 04:10:00 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:06.872 04:10:00 
bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:06.872 04:10:00 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:06.872 04:10:00 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:06.872 04:10:00 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:06.872 04:10:00 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:07.133 04:10:00 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:07.133 04:10:00 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:07.133 "name": "Existed_Raid", 00:19:07.133 "uuid": "b5efb5fa-4363-452d-9257-8e26672ad65e", 00:19:07.133 "strip_size_kb": 0, 00:19:07.133 "state": "online", 00:19:07.133 "raid_level": "raid1", 00:19:07.133 "superblock": true, 00:19:07.133 "num_base_bdevs": 2, 00:19:07.133 "num_base_bdevs_discovered": 1, 00:19:07.133 "num_base_bdevs_operational": 1, 00:19:07.133 "base_bdevs_list": [ 00:19:07.133 { 00:19:07.133 "name": null, 00:19:07.133 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:07.133 "is_configured": false, 00:19:07.133 "data_offset": 0, 00:19:07.133 "data_size": 7936 00:19:07.133 }, 00:19:07.133 { 00:19:07.133 "name": "BaseBdev2", 00:19:07.133 "uuid": "a8f669e3-7dd0-4c3d-b65d-d1243c4e2867", 00:19:07.133 "is_configured": true, 00:19:07.133 "data_offset": 256, 00:19:07.133 "data_size": 7936 00:19:07.133 } 00:19:07.133 ] 00:19:07.133 }' 00:19:07.133 04:10:00 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:07.133 04:10:00 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:07.393 04:10:00 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:19:07.393 04:10:00 
bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:19:07.393 04:10:00 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:07.393 04:10:00 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:07.393 04:10:00 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:07.393 04:10:00 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:19:07.393 04:10:00 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:07.393 04:10:00 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:19:07.393 04:10:00 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:19:07.393 04:10:00 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:19:07.393 04:10:00 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:07.393 04:10:00 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:07.393 [2024-12-06 04:10:00.738839] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:19:07.393 [2024-12-06 04:10:00.738986] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:19:07.654 [2024-12-06 04:10:00.831458] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:07.654 [2024-12-06 04:10:00.831563] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:19:07.654 [2024-12-06 04:10:00.831604] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:19:07.654 04:10:00 
bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:07.654 04:10:00 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:19:07.654 04:10:00 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:19:07.655 04:10:00 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:07.655 04:10:00 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:19:07.655 04:10:00 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:07.655 04:10:00 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:07.655 04:10:00 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:07.655 04:10:00 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:19:07.655 04:10:00 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:19:07.655 04:10:00 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:19:07.655 04:10:00 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@326 -- # killprocess 86208 00:19:07.655 04:10:00 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@954 -- # '[' -z 86208 ']' 00:19:07.655 04:10:00 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@958 -- # kill -0 86208 00:19:07.655 04:10:00 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@959 -- # uname 00:19:07.655 04:10:00 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:07.655 04:10:00 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 86208 00:19:07.655 04:10:00 bdev_raid.raid_state_function_test_sb_4k -- 
common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:07.655 04:10:00 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:07.655 04:10:00 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@972 -- # echo 'killing process with pid 86208' 00:19:07.655 killing process with pid 86208 00:19:07.655 04:10:00 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@973 -- # kill 86208 00:19:07.655 [2024-12-06 04:10:00.928412] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:19:07.655 04:10:00 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@978 -- # wait 86208 00:19:07.655 [2024-12-06 04:10:00.944617] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:19:09.035 ************************************ 00:19:09.035 END TEST raid_state_function_test_sb_4k 00:19:09.035 ************************************ 00:19:09.036 04:10:02 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@328 -- # return 0 00:19:09.036 00:19:09.036 real 0m5.050s 00:19:09.036 user 0m7.305s 00:19:09.036 sys 0m0.857s 00:19:09.036 04:10:02 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:09.036 04:10:02 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:09.036 04:10:02 bdev_raid -- bdev/bdev_raid.sh@998 -- # run_test raid_superblock_test_4k raid_superblock_test raid1 2 00:19:09.036 04:10:02 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:19:09.036 04:10:02 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:09.036 04:10:02 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:19:09.036 ************************************ 00:19:09.036 START TEST raid_superblock_test_4k 00:19:09.036 ************************************ 00:19:09.036 04:10:02 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@1129 -- # 
raid_superblock_test raid1 2 00:19:09.036 04:10:02 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1 00:19:09.036 04:10:02 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2 00:19:09.036 04:10:02 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:19:09.036 04:10:02 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:19:09.036 04:10:02 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:19:09.036 04:10:02 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:19:09.036 04:10:02 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:19:09.036 04:10:02 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:19:09.036 04:10:02 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:19:09.036 04:10:02 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@399 -- # local strip_size 00:19:09.036 04:10:02 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:19:09.036 04:10:02 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:19:09.036 04:10:02 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:19:09.036 04:10:02 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@404 -- # '[' raid1 '!=' raid1 ']' 00:19:09.036 04:10:02 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:19:09.036 04:10:02 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@412 -- # raid_pid=86460 00:19:09.036 04:10:02 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@413 -- # waitforlisten 86460 00:19:09.036 04:10:02 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@835 -- # '[' -z 86460 ']' 00:19:09.036 04:10:02 
bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:19:09.036 04:10:02 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:09.036 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:09.036 04:10:02 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:09.036 04:10:02 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:09.036 04:10:02 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:09.036 04:10:02 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:19:09.036 [2024-12-06 04:10:02.202080] Starting SPDK v25.01-pre git sha1 a4d2a837b / DPDK 24.03.0 initialization... 00:19:09.036 [2024-12-06 04:10:02.202619] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86460 ] 00:19:09.036 [2024-12-06 04:10:02.359758] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:09.296 [2024-12-06 04:10:02.470334] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:09.556 [2024-12-06 04:10:02.661145] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:09.556 [2024-12-06 04:10:02.661209] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:09.816 04:10:03 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:09.816 04:10:03 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@868 -- # return 0 00:19:09.816 04:10:03 
bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:19:09.816 04:10:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:19:09.816 04:10:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:19:09.816 04:10:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:19:09.816 04:10:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:19:09.816 04:10:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:19:09.816 04:10:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:19:09.816 04:10:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:19:09.816 04:10:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc1 00:19:09.816 04:10:03 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:09.816 04:10:03 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:19:09.816 malloc1 00:19:09.816 04:10:03 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:09.816 04:10:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:19:09.816 04:10:03 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:09.816 04:10:03 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:19:09.816 [2024-12-06 04:10:03.089357] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:19:09.816 [2024-12-06 04:10:03.089477] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:09.816 
[2024-12-06 04:10:03.089520] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:19:09.816 [2024-12-06 04:10:03.089583] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:09.816 [2024-12-06 04:10:03.091647] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:09.816 [2024-12-06 04:10:03.091723] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:19:09.816 pt1 00:19:09.816 04:10:03 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:09.816 04:10:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:19:09.816 04:10:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:19:09.816 04:10:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:19:09.816 04:10:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:19:09.816 04:10:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:19:09.816 04:10:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:19:09.816 04:10:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:19:09.816 04:10:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:19:09.816 04:10:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc2 00:19:09.816 04:10:03 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:09.816 04:10:03 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:19:09.816 malloc2 00:19:09.816 04:10:03 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:19:09.816 04:10:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:19:09.816 04:10:03 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:09.816 04:10:03 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:19:09.816 [2024-12-06 04:10:03.147684] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:19:09.816 [2024-12-06 04:10:03.147787] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:09.816 [2024-12-06 04:10:03.147832] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:19:09.816 [2024-12-06 04:10:03.147841] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:09.816 [2024-12-06 04:10:03.149857] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:09.816 [2024-12-06 04:10:03.149895] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:19:09.816 pt2 00:19:09.816 04:10:03 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:09.816 04:10:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:19:09.816 04:10:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:19:09.816 04:10:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2'\''' -n raid_bdev1 -s 00:19:09.816 04:10:03 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:09.816 04:10:03 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:19:09.816 [2024-12-06 04:10:03.159705] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:19:09.816 [2024-12-06 04:10:03.161447] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:19:09.816 [2024-12-06 04:10:03.161624] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:19:09.817 [2024-12-06 04:10:03.161640] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:19:09.817 [2024-12-06 04:10:03.161884] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:19:09.817 [2024-12-06 04:10:03.162034] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:19:09.817 [2024-12-06 04:10:03.162049] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:19:09.817 [2024-12-06 04:10:03.162205] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:09.817 04:10:03 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:09.817 04:10:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:19:09.817 04:10:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:09.817 04:10:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:09.817 04:10:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:09.817 04:10:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:09.817 04:10:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:19:09.817 04:10:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:09.817 04:10:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:09.817 04:10:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
00:19:09.817 04:10:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:10.076 04:10:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:10.076 04:10:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:10.076 04:10:03 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:10.076 04:10:03 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:19:10.076 04:10:03 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:10.076 04:10:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:10.076 "name": "raid_bdev1", 00:19:10.076 "uuid": "1826b344-6f6d-488b-bcb3-c95a96472d1c", 00:19:10.076 "strip_size_kb": 0, 00:19:10.076 "state": "online", 00:19:10.076 "raid_level": "raid1", 00:19:10.076 "superblock": true, 00:19:10.076 "num_base_bdevs": 2, 00:19:10.076 "num_base_bdevs_discovered": 2, 00:19:10.076 "num_base_bdevs_operational": 2, 00:19:10.076 "base_bdevs_list": [ 00:19:10.076 { 00:19:10.076 "name": "pt1", 00:19:10.076 "uuid": "00000000-0000-0000-0000-000000000001", 00:19:10.076 "is_configured": true, 00:19:10.076 "data_offset": 256, 00:19:10.076 "data_size": 7936 00:19:10.076 }, 00:19:10.076 { 00:19:10.076 "name": "pt2", 00:19:10.076 "uuid": "00000000-0000-0000-0000-000000000002", 00:19:10.076 "is_configured": true, 00:19:10.076 "data_offset": 256, 00:19:10.076 "data_size": 7936 00:19:10.076 } 00:19:10.076 ] 00:19:10.076 }' 00:19:10.076 04:10:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:10.076 04:10:03 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:19:10.335 04:10:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:19:10.335 04:10:03 
bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:19:10.335 04:10:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:19:10.335 04:10:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:19:10.335 04:10:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@184 -- # local name 00:19:10.335 04:10:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:19:10.335 04:10:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:19:10.335 04:10:03 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:10.335 04:10:03 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:19:10.335 04:10:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:19:10.335 [2024-12-06 04:10:03.635162] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:10.335 04:10:03 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:10.335 04:10:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:19:10.335 "name": "raid_bdev1", 00:19:10.335 "aliases": [ 00:19:10.335 "1826b344-6f6d-488b-bcb3-c95a96472d1c" 00:19:10.335 ], 00:19:10.335 "product_name": "Raid Volume", 00:19:10.335 "block_size": 4096, 00:19:10.335 "num_blocks": 7936, 00:19:10.335 "uuid": "1826b344-6f6d-488b-bcb3-c95a96472d1c", 00:19:10.335 "assigned_rate_limits": { 00:19:10.335 "rw_ios_per_sec": 0, 00:19:10.335 "rw_mbytes_per_sec": 0, 00:19:10.335 "r_mbytes_per_sec": 0, 00:19:10.335 "w_mbytes_per_sec": 0 00:19:10.335 }, 00:19:10.335 "claimed": false, 00:19:10.335 "zoned": false, 00:19:10.335 "supported_io_types": { 00:19:10.335 "read": true, 00:19:10.335 "write": true, 00:19:10.335 "unmap": false, 00:19:10.335 "flush": false, 
00:19:10.335 "reset": true, 00:19:10.335 "nvme_admin": false, 00:19:10.335 "nvme_io": false, 00:19:10.335 "nvme_io_md": false, 00:19:10.335 "write_zeroes": true, 00:19:10.335 "zcopy": false, 00:19:10.335 "get_zone_info": false, 00:19:10.335 "zone_management": false, 00:19:10.335 "zone_append": false, 00:19:10.335 "compare": false, 00:19:10.335 "compare_and_write": false, 00:19:10.335 "abort": false, 00:19:10.335 "seek_hole": false, 00:19:10.335 "seek_data": false, 00:19:10.335 "copy": false, 00:19:10.335 "nvme_iov_md": false 00:19:10.335 }, 00:19:10.335 "memory_domains": [ 00:19:10.335 { 00:19:10.335 "dma_device_id": "system", 00:19:10.335 "dma_device_type": 1 00:19:10.335 }, 00:19:10.335 { 00:19:10.335 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:10.335 "dma_device_type": 2 00:19:10.335 }, 00:19:10.335 { 00:19:10.335 "dma_device_id": "system", 00:19:10.335 "dma_device_type": 1 00:19:10.335 }, 00:19:10.335 { 00:19:10.335 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:10.335 "dma_device_type": 2 00:19:10.335 } 00:19:10.335 ], 00:19:10.335 "driver_specific": { 00:19:10.335 "raid": { 00:19:10.335 "uuid": "1826b344-6f6d-488b-bcb3-c95a96472d1c", 00:19:10.335 "strip_size_kb": 0, 00:19:10.335 "state": "online", 00:19:10.335 "raid_level": "raid1", 00:19:10.335 "superblock": true, 00:19:10.335 "num_base_bdevs": 2, 00:19:10.335 "num_base_bdevs_discovered": 2, 00:19:10.335 "num_base_bdevs_operational": 2, 00:19:10.335 "base_bdevs_list": [ 00:19:10.335 { 00:19:10.335 "name": "pt1", 00:19:10.335 "uuid": "00000000-0000-0000-0000-000000000001", 00:19:10.335 "is_configured": true, 00:19:10.335 "data_offset": 256, 00:19:10.335 "data_size": 7936 00:19:10.335 }, 00:19:10.335 { 00:19:10.335 "name": "pt2", 00:19:10.335 "uuid": "00000000-0000-0000-0000-000000000002", 00:19:10.335 "is_configured": true, 00:19:10.335 "data_offset": 256, 00:19:10.335 "data_size": 7936 00:19:10.335 } 00:19:10.335 ] 00:19:10.335 } 00:19:10.335 } 00:19:10.335 }' 00:19:10.335 04:10:03 
bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:19:10.595 04:10:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:19:10.595 pt2' 00:19:10.595 04:10:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:10.595 04:10:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 ' 00:19:10.595 04:10:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:19:10.595 04:10:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:19:10.595 04:10:03 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:10.595 04:10:03 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:19:10.595 04:10:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:10.595 04:10:03 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:10.595 04:10:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 ' 00:19:10.595 04:10:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]] 00:19:10.595 04:10:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:19:10.595 04:10:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:19:10.595 04:10:03 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:10.595 04:10:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:10.595 04:10:03 
bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:19:10.595 04:10:03 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:10.595 04:10:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 ' 00:19:10.595 04:10:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]] 00:19:10.595 04:10:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:19:10.595 04:10:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:19:10.595 04:10:03 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:10.595 04:10:03 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:19:10.595 [2024-12-06 04:10:03.858745] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:10.595 04:10:03 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:10.595 04:10:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=1826b344-6f6d-488b-bcb3-c95a96472d1c 00:19:10.595 04:10:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@436 -- # '[' -z 1826b344-6f6d-488b-bcb3-c95a96472d1c ']' 00:19:10.595 04:10:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:19:10.595 04:10:03 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:10.595 04:10:03 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:19:10.595 [2024-12-06 04:10:03.898381] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:19:10.595 [2024-12-06 04:10:03.898404] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:19:10.595 [2024-12-06 04:10:03.898482] bdev_raid.c: 
492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:10.595 [2024-12-06 04:10:03.898538] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:19:10.596 [2024-12-06 04:10:03.898549] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:19:10.596 04:10:03 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:10.596 04:10:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:19:10.596 04:10:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:10.596 04:10:03 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:10.596 04:10:03 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:19:10.596 04:10:03 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:10.855 04:10:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:19:10.855 04:10:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:19:10.855 04:10:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:19:10.855 04:10:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:19:10.855 04:10:03 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:10.855 04:10:03 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:19:10.855 04:10:03 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:10.855 04:10:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:19:10.855 04:10:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 
00:19:10.855 04:10:03 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:10.855 04:10:03 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:19:10.855 04:10:03 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:10.855 04:10:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:19:10.855 04:10:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:19:10.855 04:10:03 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:10.855 04:10:03 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:19:10.855 04:10:04 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:10.855 04:10:04 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:19:10.855 04:10:04 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:19:10.855 04:10:04 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@652 -- # local es=0 00:19:10.855 04:10:04 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:19:10.855 04:10:04 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:19:10.855 04:10:04 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:10.855 04:10:04 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:19:10.855 04:10:04 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:10.855 04:10:04 bdev_raid.raid_superblock_test_4k -- 
common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:19:10.855 04:10:04 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:10.855 04:10:04 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:19:10.855 [2024-12-06 04:10:04.046231] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:19:10.855 [2024-12-06 04:10:04.048108] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:19:10.855 [2024-12-06 04:10:04.048224] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:19:10.855 [2024-12-06 04:10:04.048330] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:19:10.855 [2024-12-06 04:10:04.048387] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:19:10.855 [2024-12-06 04:10:04.048424] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:19:10.855 request: 00:19:10.855 { 00:19:10.855 "name": "raid_bdev1", 00:19:10.855 "raid_level": "raid1", 00:19:10.855 "base_bdevs": [ 00:19:10.856 "malloc1", 00:19:10.856 "malloc2" 00:19:10.856 ], 00:19:10.856 "superblock": false, 00:19:10.856 "method": "bdev_raid_create", 00:19:10.856 "req_id": 1 00:19:10.856 } 00:19:10.856 Got JSON-RPC error response 00:19:10.856 response: 00:19:10.856 { 00:19:10.856 "code": -17, 00:19:10.856 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:19:10.856 } 00:19:10.856 04:10:04 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:19:10.856 04:10:04 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@655 -- # es=1 00:19:10.856 04:10:04 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@663 -- # (( es > 
128 )) 00:19:10.856 04:10:04 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:19:10.856 04:10:04 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:19:10.856 04:10:04 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:10.856 04:10:04 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:19:10.856 04:10:04 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:10.856 04:10:04 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:19:10.856 04:10:04 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:10.856 04:10:04 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:19:10.856 04:10:04 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:19:10.856 04:10:04 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:19:10.856 04:10:04 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:10.856 04:10:04 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:19:10.856 [2024-12-06 04:10:04.114104] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:19:10.856 [2024-12-06 04:10:04.114252] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:10.856 [2024-12-06 04:10:04.114277] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:19:10.856 [2024-12-06 04:10:04.114289] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:10.856 [2024-12-06 04:10:04.116508] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:10.856 [2024-12-06 04:10:04.116549] vbdev_passthru.c: 
711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:19:10.856 [2024-12-06 04:10:04.116668] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:19:10.856 [2024-12-06 04:10:04.116732] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:19:10.856 pt1 00:19:10.856 04:10:04 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:10.856 04:10:04 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:19:10.856 04:10:04 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:10.856 04:10:04 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:19:10.856 04:10:04 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:10.856 04:10:04 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:10.856 04:10:04 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:19:10.856 04:10:04 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:10.856 04:10:04 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:10.856 04:10:04 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:10.856 04:10:04 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:10.856 04:10:04 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:10.856 04:10:04 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:10.856 04:10:04 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:10.856 04:10:04 bdev_raid.raid_superblock_test_4k -- 
common/autotest_common.sh@10 -- # set +x 00:19:10.856 04:10:04 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:10.856 04:10:04 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:10.856 "name": "raid_bdev1", 00:19:10.856 "uuid": "1826b344-6f6d-488b-bcb3-c95a96472d1c", 00:19:10.856 "strip_size_kb": 0, 00:19:10.856 "state": "configuring", 00:19:10.856 "raid_level": "raid1", 00:19:10.856 "superblock": true, 00:19:10.856 "num_base_bdevs": 2, 00:19:10.856 "num_base_bdevs_discovered": 1, 00:19:10.856 "num_base_bdevs_operational": 2, 00:19:10.856 "base_bdevs_list": [ 00:19:10.856 { 00:19:10.856 "name": "pt1", 00:19:10.856 "uuid": "00000000-0000-0000-0000-000000000001", 00:19:10.856 "is_configured": true, 00:19:10.856 "data_offset": 256, 00:19:10.856 "data_size": 7936 00:19:10.856 }, 00:19:10.856 { 00:19:10.856 "name": null, 00:19:10.856 "uuid": "00000000-0000-0000-0000-000000000002", 00:19:10.856 "is_configured": false, 00:19:10.856 "data_offset": 256, 00:19:10.856 "data_size": 7936 00:19:10.856 } 00:19:10.856 ] 00:19:10.856 }' 00:19:10.856 04:10:04 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:10.856 04:10:04 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:19:11.423 04:10:04 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']' 00:19:11.423 04:10:04 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:19:11.423 04:10:04 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:19:11.423 04:10:04 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:19:11.423 04:10:04 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:11.423 04:10:04 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 
-- # set +x 00:19:11.423 [2024-12-06 04:10:04.529374] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:19:11.423 [2024-12-06 04:10:04.529499] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:11.423 [2024-12-06 04:10:04.529540] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:19:11.423 [2024-12-06 04:10:04.529571] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:11.423 [2024-12-06 04:10:04.530035] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:11.423 [2024-12-06 04:10:04.530109] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:19:11.423 [2024-12-06 04:10:04.530213] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:19:11.423 [2024-12-06 04:10:04.530268] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:19:11.423 [2024-12-06 04:10:04.530398] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:19:11.423 [2024-12-06 04:10:04.530437] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:19:11.423 [2024-12-06 04:10:04.530687] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:19:11.423 [2024-12-06 04:10:04.530866] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:19:11.423 [2024-12-06 04:10:04.530904] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:19:11.423 [2024-12-06 04:10:04.531107] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:11.423 pt2 00:19:11.423 04:10:04 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:11.423 04:10:04 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:19:11.423 04:10:04 
bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:19:11.423 04:10:04 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:19:11.423 04:10:04 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:11.423 04:10:04 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:11.423 04:10:04 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:11.423 04:10:04 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:11.423 04:10:04 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:19:11.423 04:10:04 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:11.423 04:10:04 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:11.423 04:10:04 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:11.423 04:10:04 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:11.423 04:10:04 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:11.423 04:10:04 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:11.423 04:10:04 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:11.423 04:10:04 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:19:11.423 04:10:04 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:11.423 04:10:04 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:11.423 "name": "raid_bdev1", 00:19:11.423 "uuid": "1826b344-6f6d-488b-bcb3-c95a96472d1c", 00:19:11.423 
"strip_size_kb": 0, 00:19:11.423 "state": "online", 00:19:11.423 "raid_level": "raid1", 00:19:11.423 "superblock": true, 00:19:11.423 "num_base_bdevs": 2, 00:19:11.423 "num_base_bdevs_discovered": 2, 00:19:11.423 "num_base_bdevs_operational": 2, 00:19:11.423 "base_bdevs_list": [ 00:19:11.423 { 00:19:11.423 "name": "pt1", 00:19:11.423 "uuid": "00000000-0000-0000-0000-000000000001", 00:19:11.423 "is_configured": true, 00:19:11.423 "data_offset": 256, 00:19:11.423 "data_size": 7936 00:19:11.423 }, 00:19:11.423 { 00:19:11.423 "name": "pt2", 00:19:11.423 "uuid": "00000000-0000-0000-0000-000000000002", 00:19:11.423 "is_configured": true, 00:19:11.423 "data_offset": 256, 00:19:11.423 "data_size": 7936 00:19:11.423 } 00:19:11.423 ] 00:19:11.423 }' 00:19:11.423 04:10:04 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:11.423 04:10:04 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:19:11.682 04:10:04 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:19:11.682 04:10:04 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:19:11.682 04:10:04 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:19:11.682 04:10:04 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:19:11.682 04:10:04 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@184 -- # local name 00:19:11.682 04:10:04 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:19:11.682 04:10:04 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:19:11.682 04:10:04 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:11.682 04:10:04 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:19:11.682 04:10:04 
bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:19:11.682 [2024-12-06 04:10:04.932927] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:11.682 04:10:04 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:11.682 04:10:04 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:19:11.682 "name": "raid_bdev1", 00:19:11.682 "aliases": [ 00:19:11.682 "1826b344-6f6d-488b-bcb3-c95a96472d1c" 00:19:11.682 ], 00:19:11.682 "product_name": "Raid Volume", 00:19:11.682 "block_size": 4096, 00:19:11.682 "num_blocks": 7936, 00:19:11.682 "uuid": "1826b344-6f6d-488b-bcb3-c95a96472d1c", 00:19:11.682 "assigned_rate_limits": { 00:19:11.682 "rw_ios_per_sec": 0, 00:19:11.682 "rw_mbytes_per_sec": 0, 00:19:11.682 "r_mbytes_per_sec": 0, 00:19:11.682 "w_mbytes_per_sec": 0 00:19:11.682 }, 00:19:11.682 "claimed": false, 00:19:11.682 "zoned": false, 00:19:11.682 "supported_io_types": { 00:19:11.682 "read": true, 00:19:11.682 "write": true, 00:19:11.682 "unmap": false, 00:19:11.682 "flush": false, 00:19:11.682 "reset": true, 00:19:11.682 "nvme_admin": false, 00:19:11.682 "nvme_io": false, 00:19:11.682 "nvme_io_md": false, 00:19:11.682 "write_zeroes": true, 00:19:11.682 "zcopy": false, 00:19:11.682 "get_zone_info": false, 00:19:11.682 "zone_management": false, 00:19:11.682 "zone_append": false, 00:19:11.682 "compare": false, 00:19:11.682 "compare_and_write": false, 00:19:11.682 "abort": false, 00:19:11.682 "seek_hole": false, 00:19:11.682 "seek_data": false, 00:19:11.682 "copy": false, 00:19:11.682 "nvme_iov_md": false 00:19:11.682 }, 00:19:11.682 "memory_domains": [ 00:19:11.682 { 00:19:11.682 "dma_device_id": "system", 00:19:11.682 "dma_device_type": 1 00:19:11.682 }, 00:19:11.682 { 00:19:11.682 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:11.682 "dma_device_type": 2 00:19:11.682 }, 00:19:11.682 { 00:19:11.682 "dma_device_id": "system", 00:19:11.682 
"dma_device_type": 1 00:19:11.682 }, 00:19:11.682 { 00:19:11.682 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:11.682 "dma_device_type": 2 00:19:11.682 } 00:19:11.682 ], 00:19:11.682 "driver_specific": { 00:19:11.682 "raid": { 00:19:11.682 "uuid": "1826b344-6f6d-488b-bcb3-c95a96472d1c", 00:19:11.682 "strip_size_kb": 0, 00:19:11.682 "state": "online", 00:19:11.682 "raid_level": "raid1", 00:19:11.682 "superblock": true, 00:19:11.682 "num_base_bdevs": 2, 00:19:11.682 "num_base_bdevs_discovered": 2, 00:19:11.682 "num_base_bdevs_operational": 2, 00:19:11.682 "base_bdevs_list": [ 00:19:11.682 { 00:19:11.682 "name": "pt1", 00:19:11.682 "uuid": "00000000-0000-0000-0000-000000000001", 00:19:11.682 "is_configured": true, 00:19:11.682 "data_offset": 256, 00:19:11.682 "data_size": 7936 00:19:11.682 }, 00:19:11.682 { 00:19:11.682 "name": "pt2", 00:19:11.682 "uuid": "00000000-0000-0000-0000-000000000002", 00:19:11.682 "is_configured": true, 00:19:11.682 "data_offset": 256, 00:19:11.682 "data_size": 7936 00:19:11.682 } 00:19:11.682 ] 00:19:11.682 } 00:19:11.682 } 00:19:11.682 }' 00:19:11.682 04:10:04 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:19:11.682 04:10:04 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:19:11.682 pt2' 00:19:11.682 04:10:05 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:11.940 04:10:05 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 ' 00:19:11.940 04:10:05 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:19:11.940 04:10:05 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:19:11.940 04:10:05 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, 
.md_interleave, .dif_type] | join(" ")' 00:19:11.940 04:10:05 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:11.940 04:10:05 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:19:11.940 04:10:05 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:11.940 04:10:05 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 ' 00:19:11.940 04:10:05 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]] 00:19:11.940 04:10:05 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:19:11.940 04:10:05 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:19:11.940 04:10:05 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:11.940 04:10:05 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:11.940 04:10:05 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:19:11.940 04:10:05 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:11.940 04:10:05 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 ' 00:19:11.940 04:10:05 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]] 00:19:11.940 04:10:05 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:19:11.940 04:10:05 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:19:11.940 04:10:05 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:11.940 04:10:05 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:19:11.940 [2024-12-06 04:10:05.164475] 
bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:11.940 04:10:05 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:11.940 04:10:05 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@487 -- # '[' 1826b344-6f6d-488b-bcb3-c95a96472d1c '!=' 1826b344-6f6d-488b-bcb3-c95a96472d1c ']' 00:19:11.940 04:10:05 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1 00:19:11.940 04:10:05 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@198 -- # case $1 in 00:19:11.940 04:10:05 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@199 -- # return 0 00:19:11.940 04:10:05 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:19:11.940 04:10:05 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:11.940 04:10:05 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:19:11.940 [2024-12-06 04:10:05.212191] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:19:11.940 04:10:05 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:11.940 04:10:05 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:19:11.940 04:10:05 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:11.940 04:10:05 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:11.940 04:10:05 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:11.940 04:10:05 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:11.940 04:10:05 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:19:11.940 04:10:05 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:19:11.940 04:10:05 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:11.940 04:10:05 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:11.940 04:10:05 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:11.940 04:10:05 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:11.940 04:10:05 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:11.940 04:10:05 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:11.940 04:10:05 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:19:11.940 04:10:05 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:11.940 04:10:05 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:11.940 "name": "raid_bdev1", 00:19:11.940 "uuid": "1826b344-6f6d-488b-bcb3-c95a96472d1c", 00:19:11.940 "strip_size_kb": 0, 00:19:11.940 "state": "online", 00:19:11.940 "raid_level": "raid1", 00:19:11.940 "superblock": true, 00:19:11.940 "num_base_bdevs": 2, 00:19:11.940 "num_base_bdevs_discovered": 1, 00:19:11.940 "num_base_bdevs_operational": 1, 00:19:11.940 "base_bdevs_list": [ 00:19:11.940 { 00:19:11.940 "name": null, 00:19:11.940 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:11.940 "is_configured": false, 00:19:11.940 "data_offset": 0, 00:19:11.940 "data_size": 7936 00:19:11.940 }, 00:19:11.940 { 00:19:11.940 "name": "pt2", 00:19:11.940 "uuid": "00000000-0000-0000-0000-000000000002", 00:19:11.940 "is_configured": true, 00:19:11.940 "data_offset": 256, 00:19:11.940 "data_size": 7936 00:19:11.940 } 00:19:11.940 ] 00:19:11.940 }' 00:19:11.940 04:10:05 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:11.940 04:10:05 
bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:19:12.509 04:10:05 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:19:12.509 04:10:05 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:12.509 04:10:05 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:19:12.509 [2024-12-06 04:10:05.603535] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:19:12.509 [2024-12-06 04:10:05.603609] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:19:12.509 [2024-12-06 04:10:05.603701] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:12.509 [2024-12-06 04:10:05.603760] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:19:12.509 [2024-12-06 04:10:05.603803] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:19:12.509 04:10:05 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:12.509 04:10:05 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:12.509 04:10:05 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:12.509 04:10:05 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:19:12.509 04:10:05 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:19:12.509 04:10:05 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:12.509 04:10:05 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:19:12.509 04:10:05 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:19:12.509 04:10:05 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@506 
-- # (( i = 1 )) 00:19:12.509 04:10:05 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:19:12.509 04:10:05 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:19:12.509 04:10:05 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:12.509 04:10:05 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:19:12.509 04:10:05 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:12.509 04:10:05 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:19:12.509 04:10:05 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:19:12.509 04:10:05 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:19:12.509 04:10:05 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:19:12.509 04:10:05 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@519 -- # i=1 00:19:12.509 04:10:05 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:19:12.509 04:10:05 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:12.509 04:10:05 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:19:12.509 [2024-12-06 04:10:05.679396] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:19:12.509 [2024-12-06 04:10:05.679497] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:12.509 [2024-12-06 04:10:05.679517] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:19:12.509 [2024-12-06 04:10:05.679528] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:12.509 [2024-12-06 04:10:05.681674] vbdev_passthru.c: 
710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:12.509 [2024-12-06 04:10:05.681717] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:19:12.509 [2024-12-06 04:10:05.681796] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:19:12.509 [2024-12-06 04:10:05.681848] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:19:12.509 [2024-12-06 04:10:05.681956] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:19:12.509 [2024-12-06 04:10:05.681968] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:19:12.509 [2024-12-06 04:10:05.682208] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:19:12.509 [2024-12-06 04:10:05.682396] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:19:12.509 [2024-12-06 04:10:05.682410] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:19:12.509 [2024-12-06 04:10:05.682543] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:12.509 pt2 00:19:12.509 04:10:05 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:12.509 04:10:05 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:19:12.509 04:10:05 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:12.509 04:10:05 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:12.509 04:10:05 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:12.509 04:10:05 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:12.509 04:10:05 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@107 -- # 
local num_base_bdevs_operational=1 00:19:12.509 04:10:05 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:12.509 04:10:05 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:12.509 04:10:05 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:12.509 04:10:05 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:12.509 04:10:05 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:12.509 04:10:05 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:12.509 04:10:05 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:12.509 04:10:05 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:19:12.509 04:10:05 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:12.509 04:10:05 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:12.509 "name": "raid_bdev1", 00:19:12.509 "uuid": "1826b344-6f6d-488b-bcb3-c95a96472d1c", 00:19:12.509 "strip_size_kb": 0, 00:19:12.509 "state": "online", 00:19:12.509 "raid_level": "raid1", 00:19:12.509 "superblock": true, 00:19:12.509 "num_base_bdevs": 2, 00:19:12.509 "num_base_bdevs_discovered": 1, 00:19:12.509 "num_base_bdevs_operational": 1, 00:19:12.509 "base_bdevs_list": [ 00:19:12.509 { 00:19:12.509 "name": null, 00:19:12.509 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:12.509 "is_configured": false, 00:19:12.509 "data_offset": 256, 00:19:12.509 "data_size": 7936 00:19:12.509 }, 00:19:12.509 { 00:19:12.509 "name": "pt2", 00:19:12.509 "uuid": "00000000-0000-0000-0000-000000000002", 00:19:12.509 "is_configured": true, 00:19:12.509 "data_offset": 256, 00:19:12.509 "data_size": 7936 00:19:12.509 } 00:19:12.509 ] 00:19:12.509 }' 
00:19:12.509 04:10:05 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:12.509 04:10:05 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:19:13.087 04:10:06 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:19:13.087 04:10:06 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:13.087 04:10:06 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:19:13.087 [2024-12-06 04:10:06.126609] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:19:13.087 [2024-12-06 04:10:06.126701] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:19:13.087 [2024-12-06 04:10:06.126800] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:13.087 [2024-12-06 04:10:06.126866] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:19:13.087 [2024-12-06 04:10:06.126908] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:19:13.087 04:10:06 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:13.087 04:10:06 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:19:13.087 04:10:06 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:13.087 04:10:06 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:13.087 04:10:06 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:19:13.087 04:10:06 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:13.087 04:10:06 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:19:13.087 04:10:06 bdev_raid.raid_superblock_test_4k 
-- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:19:13.087 04:10:06 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@532 -- # '[' 2 -gt 2 ']' 00:19:13.087 04:10:06 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:19:13.087 04:10:06 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:13.087 04:10:06 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:19:13.087 [2024-12-06 04:10:06.178557] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:19:13.087 [2024-12-06 04:10:06.178686] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:13.087 [2024-12-06 04:10:06.178723] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009980 00:19:13.087 [2024-12-06 04:10:06.178755] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:13.087 [2024-12-06 04:10:06.180906] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:13.087 [2024-12-06 04:10:06.180991] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:19:13.087 [2024-12-06 04:10:06.181124] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:19:13.087 [2024-12-06 04:10:06.181206] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:19:13.087 [2024-12-06 04:10:06.181397] bdev_raid.c:3685:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:19:13.087 [2024-12-06 04:10:06.181454] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:19:13.087 [2024-12-06 04:10:06.181494] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state configuring 00:19:13.087 [2024-12-06 04:10:06.181598] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:19:13.087 [2024-12-06 04:10:06.181707] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008900 00:19:13.087 [2024-12-06 04:10:06.181719] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:19:13.087 [2024-12-06 04:10:06.181973] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:19:13.087 [2024-12-06 04:10:06.182140] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:19:13.087 [2024-12-06 04:10:06.182155] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:19:13.087 [2024-12-06 04:10:06.182310] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:13.087 pt1 00:19:13.087 04:10:06 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:13.087 04:10:06 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@542 -- # '[' 2 -gt 2 ']' 00:19:13.087 04:10:06 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:19:13.087 04:10:06 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:13.087 04:10:06 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:13.087 04:10:06 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:13.087 04:10:06 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:13.087 04:10:06 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:19:13.087 04:10:06 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:13.087 04:10:06 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 
00:19:13.087 04:10:06 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:13.087 04:10:06 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:13.087 04:10:06 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:13.087 04:10:06 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:13.087 04:10:06 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:13.087 04:10:06 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:19:13.087 04:10:06 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:13.087 04:10:06 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:13.087 "name": "raid_bdev1", 00:19:13.087 "uuid": "1826b344-6f6d-488b-bcb3-c95a96472d1c", 00:19:13.087 "strip_size_kb": 0, 00:19:13.087 "state": "online", 00:19:13.087 "raid_level": "raid1", 00:19:13.087 "superblock": true, 00:19:13.087 "num_base_bdevs": 2, 00:19:13.087 "num_base_bdevs_discovered": 1, 00:19:13.087 "num_base_bdevs_operational": 1, 00:19:13.087 "base_bdevs_list": [ 00:19:13.087 { 00:19:13.087 "name": null, 00:19:13.087 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:13.087 "is_configured": false, 00:19:13.087 "data_offset": 256, 00:19:13.087 "data_size": 7936 00:19:13.087 }, 00:19:13.087 { 00:19:13.087 "name": "pt2", 00:19:13.087 "uuid": "00000000-0000-0000-0000-000000000002", 00:19:13.087 "is_configured": true, 00:19:13.087 "data_offset": 256, 00:19:13.087 "data_size": 7936 00:19:13.087 } 00:19:13.087 ] 00:19:13.087 }' 00:19:13.087 04:10:06 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:13.087 04:10:06 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:19:13.346 04:10:06 bdev_raid.raid_superblock_test_4k 
-- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:19:13.347 04:10:06 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:19:13.347 04:10:06 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:13.347 04:10:06 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:19:13.347 04:10:06 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:13.347 04:10:06 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:19:13.347 04:10:06 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:19:13.347 04:10:06 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:13.347 04:10:06 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:19:13.347 04:10:06 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:19:13.347 [2024-12-06 04:10:06.653944] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:13.347 04:10:06 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:13.347 04:10:06 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@558 -- # '[' 1826b344-6f6d-488b-bcb3-c95a96472d1c '!=' 1826b344-6f6d-488b-bcb3-c95a96472d1c ']' 00:19:13.347 04:10:06 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@563 -- # killprocess 86460 00:19:13.347 04:10:06 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@954 -- # '[' -z 86460 ']' 00:19:13.347 04:10:06 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@958 -- # kill -0 86460 00:19:13.347 04:10:06 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@959 -- # uname 00:19:13.347 04:10:06 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@959 -- # '[' Linux = Linux 
']' 00:19:13.347 04:10:06 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 86460 00:19:13.605 04:10:06 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:13.605 04:10:06 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:13.605 04:10:06 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@972 -- # echo 'killing process with pid 86460' 00:19:13.605 killing process with pid 86460 00:19:13.605 04:10:06 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@973 -- # kill 86460 00:19:13.605 [2024-12-06 04:10:06.718242] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:19:13.605 [2024-12-06 04:10:06.718336] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:13.605 [2024-12-06 04:10:06.718382] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:19:13.605 [2024-12-06 04:10:06.718398] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 00:19:13.605 04:10:06 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@978 -- # wait 86460 00:19:13.605 [2024-12-06 04:10:06.924122] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:19:14.983 04:10:08 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@565 -- # return 0 00:19:14.983 00:19:14.983 real 0m5.906s 00:19:14.983 user 0m8.909s 00:19:14.983 sys 0m1.093s 00:19:14.983 ************************************ 00:19:14.983 END TEST raid_superblock_test_4k 00:19:14.983 ************************************ 00:19:14.983 04:10:08 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:14.983 04:10:08 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:19:14.983 04:10:08 bdev_raid -- bdev/bdev_raid.sh@999 -- # '[' true = 
true ']' 00:19:14.983 04:10:08 bdev_raid -- bdev/bdev_raid.sh@1000 -- # run_test raid_rebuild_test_sb_4k raid_rebuild_test raid1 2 true false true 00:19:14.983 04:10:08 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:19:14.983 04:10:08 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:14.983 04:10:08 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:19:14.983 ************************************ 00:19:14.983 START TEST raid_rebuild_test_sb_4k 00:19:14.983 ************************************ 00:19:14.983 04:10:08 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 2 true false true 00:19:14.983 04:10:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:19:14.983 04:10:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2 00:19:14.983 04:10:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:19:14.983 04:10:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:19:14.983 04:10:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@573 -- # local verify=true 00:19:14.983 04:10:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:19:14.983 04:10:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:19:14.983 04:10:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:19:14.983 04:10:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:19:14.983 04:10:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:19:14.983 04:10:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:19:14.983 04:10:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:19:14.983 04:10:08 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:19:14.983 04:10:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:19:14.983 04:10:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:19:14.983 04:10:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:19:14.983 04:10:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@576 -- # local strip_size 00:19:14.983 04:10:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@577 -- # local create_arg 00:19:14.983 04:10:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:19:14.983 04:10:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@579 -- # local data_offset 00:19:14.983 04:10:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:19:14.983 04:10:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:19:14.983 04:10:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:19:14.983 04:10:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:19:14.983 04:10:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@597 -- # raid_pid=86777 00:19:14.983 04:10:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:19:14.983 04:10:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@598 -- # waitforlisten 86777 00:19:14.983 04:10:08 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@835 -- # '[' -z 86777 ']' 00:19:14.983 04:10:08 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:14.983 04:10:08 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@840 -- # local 
max_retries=100 00:19:14.983 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:14.983 04:10:08 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:14.983 04:10:08 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:14.983 04:10:08 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:14.983 I/O size of 3145728 is greater than zero copy threshold (65536). 00:19:14.983 Zero copy mechanism will not be used. 00:19:14.983 [2024-12-06 04:10:08.190121] Starting SPDK v25.01-pre git sha1 a4d2a837b / DPDK 24.03.0 initialization... 00:19:14.984 [2024-12-06 04:10:08.190244] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86777 ] 00:19:15.243 [2024-12-06 04:10:08.348667] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:15.243 [2024-12-06 04:10:08.457696] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:15.503 [2024-12-06 04:10:08.652353] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:15.503 [2024-12-06 04:10:08.652424] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:15.762 04:10:09 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:15.762 04:10:09 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@868 -- # return 0 00:19:15.762 04:10:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:19:15.762 04:10:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -b BaseBdev1_malloc 00:19:15.762 
04:10:09 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:15.762 04:10:09 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:15.762 BaseBdev1_malloc 00:19:15.762 04:10:09 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:15.762 04:10:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:19:15.762 04:10:09 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:15.762 04:10:09 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:15.762 [2024-12-06 04:10:09.080293] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:19:15.762 [2024-12-06 04:10:09.080404] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:15.762 [2024-12-06 04:10:09.080429] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:19:15.762 [2024-12-06 04:10:09.080441] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:15.762 [2024-12-06 04:10:09.082430] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:15.762 [2024-12-06 04:10:09.082472] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:19:15.762 BaseBdev1 00:19:15.762 04:10:09 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:15.762 04:10:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:19:15.762 04:10:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -b BaseBdev2_malloc 00:19:15.762 04:10:09 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:15.762 04:10:09 bdev_raid.raid_rebuild_test_sb_4k -- 
common/autotest_common.sh@10 -- # set +x 00:19:16.022 BaseBdev2_malloc 00:19:16.022 04:10:09 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:16.023 04:10:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:19:16.023 04:10:09 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:16.023 04:10:09 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:16.023 [2024-12-06 04:10:09.135216] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:19:16.023 [2024-12-06 04:10:09.135358] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:16.023 [2024-12-06 04:10:09.135388] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:19:16.023 [2024-12-06 04:10:09.135400] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:16.023 [2024-12-06 04:10:09.137739] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:16.023 [2024-12-06 04:10:09.137781] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:19:16.023 BaseBdev2 00:19:16.023 04:10:09 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:16.023 04:10:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 4096 -b spare_malloc 00:19:16.023 04:10:09 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:16.023 04:10:09 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:16.023 spare_malloc 00:19:16.023 04:10:09 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:16.023 04:10:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b 
spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:19:16.023 04:10:09 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:16.023 04:10:09 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:16.023 spare_delay 00:19:16.023 04:10:09 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:16.023 04:10:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:19:16.023 04:10:09 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:16.023 04:10:09 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:16.023 [2024-12-06 04:10:09.216181] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:19:16.023 [2024-12-06 04:10:09.216247] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:16.023 [2024-12-06 04:10:09.216284] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:19:16.023 [2024-12-06 04:10:09.216295] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:16.023 [2024-12-06 04:10:09.218447] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:16.023 [2024-12-06 04:10:09.218568] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:19:16.023 spare 00:19:16.023 04:10:09 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:16.023 04:10:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:19:16.023 04:10:09 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:16.023 04:10:09 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:16.023 
[2024-12-06 04:10:09.228226] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:19:16.023 [2024-12-06 04:10:09.229963] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:19:16.023 [2024-12-06 04:10:09.230225] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:19:16.023 [2024-12-06 04:10:09.230245] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:19:16.023 [2024-12-06 04:10:09.230477] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:19:16.023 [2024-12-06 04:10:09.230650] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:19:16.023 [2024-12-06 04:10:09.230659] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:19:16.023 [2024-12-06 04:10:09.230812] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:16.023 04:10:09 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:16.023 04:10:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:19:16.023 04:10:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:16.023 04:10:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:16.023 04:10:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:16.023 04:10:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:16.023 04:10:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:19:16.023 04:10:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:16.023 04:10:09 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:16.023 04:10:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:16.023 04:10:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:16.023 04:10:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:16.023 04:10:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:16.023 04:10:09 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:16.023 04:10:09 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:16.023 04:10:09 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:16.023 04:10:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:16.023 "name": "raid_bdev1", 00:19:16.023 "uuid": "a94e3b71-9c17-4bcb-93fd-288d9e1a7c91", 00:19:16.023 "strip_size_kb": 0, 00:19:16.023 "state": "online", 00:19:16.023 "raid_level": "raid1", 00:19:16.023 "superblock": true, 00:19:16.023 "num_base_bdevs": 2, 00:19:16.023 "num_base_bdevs_discovered": 2, 00:19:16.023 "num_base_bdevs_operational": 2, 00:19:16.023 "base_bdevs_list": [ 00:19:16.023 { 00:19:16.023 "name": "BaseBdev1", 00:19:16.023 "uuid": "d1392ffa-b0a1-51ae-896c-bd840f1f8ce8", 00:19:16.023 "is_configured": true, 00:19:16.023 "data_offset": 256, 00:19:16.023 "data_size": 7936 00:19:16.023 }, 00:19:16.023 { 00:19:16.023 "name": "BaseBdev2", 00:19:16.023 "uuid": "1d52e564-03e0-540f-b459-0f4a9ae667f6", 00:19:16.023 "is_configured": true, 00:19:16.023 "data_offset": 256, 00:19:16.023 "data_size": 7936 00:19:16.023 } 00:19:16.023 ] 00:19:16.023 }' 00:19:16.023 04:10:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:16.023 04:10:09 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set 
+x 00:19:16.594 04:10:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:19:16.594 04:10:09 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:16.594 04:10:09 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:16.594 04:10:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:19:16.594 [2024-12-06 04:10:09.687736] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:16.594 04:10:09 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:16.594 04:10:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=7936 00:19:16.594 04:10:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:16.594 04:10:09 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:16.594 04:10:09 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:16.594 04:10:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:19:16.594 04:10:09 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:16.594 04:10:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@619 -- # data_offset=256 00:19:16.594 04:10:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:19:16.594 04:10:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:19:16.594 04:10:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:19:16.594 04:10:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:19:16.594 04:10:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@9 -- # local 
rpc_server=/var/tmp/spdk.sock 00:19:16.594 04:10:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:19:16.594 04:10:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@10 -- # local bdev_list 00:19:16.594 04:10:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:19:16.594 04:10:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@11 -- # local nbd_list 00:19:16.594 04:10:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@12 -- # local i 00:19:16.594 04:10:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:19:16.594 04:10:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:19:16.594 04:10:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:19:16.855 [2024-12-06 04:10:09.951036] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:19:16.855 /dev/nbd0 00:19:16.855 04:10:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:19:16.855 04:10:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:19:16.855 04:10:09 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:19:16.855 04:10:09 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@873 -- # local i 00:19:16.855 04:10:09 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:19:16.855 04:10:09 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:19:16.855 04:10:10 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:19:16.855 04:10:10 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@877 -- # break 00:19:16.855 04:10:10 bdev_raid.raid_rebuild_test_sb_4k 
-- common/autotest_common.sh@888 -- # (( i = 1 )) 00:19:16.855 04:10:10 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:19:16.855 04:10:10 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:19:16.855 1+0 records in 00:19:16.855 1+0 records out 00:19:16.855 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000508746 s, 8.1 MB/s 00:19:16.855 04:10:10 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:16.855 04:10:10 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@890 -- # size=4096 00:19:16.855 04:10:10 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:16.855 04:10:10 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:19:16.855 04:10:10 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@893 -- # return 0 00:19:16.855 04:10:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:19:16.855 04:10:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:19:16.855 04:10:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@629 -- # '[' raid1 = raid5f ']' 00:19:16.855 04:10:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@633 -- # write_unit_size=1 00:19:16.855 04:10:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=4096 count=7936 oflag=direct 00:19:17.420 7936+0 records in 00:19:17.420 7936+0 records out 00:19:17.420 32505856 bytes (33 MB, 31 MiB) copied, 0.626053 s, 51.9 MB/s 00:19:17.420 04:10:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:19:17.420 04:10:10 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:19:17.420 04:10:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:19:17.420 04:10:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@50 -- # local nbd_list 00:19:17.420 04:10:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@51 -- # local i 00:19:17.420 04:10:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:19:17.420 04:10:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:19:17.681 04:10:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:19:17.681 [2024-12-06 04:10:10.877582] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:17.681 04:10:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:19:17.681 04:10:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:19:17.681 04:10:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:19:17.681 04:10:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:19:17.681 04:10:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:19:17.681 04:10:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@41 -- # break 00:19:17.681 04:10:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@45 -- # return 0 00:19:17.681 04:10:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:19:17.681 04:10:10 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:17.681 04:10:10 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:17.681 [2024-12-06 04:10:10.889667] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:19:17.681 04:10:10 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:17.681 04:10:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:19:17.681 04:10:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:17.682 04:10:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:17.682 04:10:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:17.682 04:10:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:17.682 04:10:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:19:17.682 04:10:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:17.682 04:10:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:17.682 04:10:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:17.682 04:10:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:17.682 04:10:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:17.682 04:10:10 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:17.682 04:10:10 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:17.682 04:10:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:17.682 04:10:10 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:17.682 04:10:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:17.682 "name": 
"raid_bdev1", 00:19:17.682 "uuid": "a94e3b71-9c17-4bcb-93fd-288d9e1a7c91", 00:19:17.682 "strip_size_kb": 0, 00:19:17.682 "state": "online", 00:19:17.682 "raid_level": "raid1", 00:19:17.682 "superblock": true, 00:19:17.682 "num_base_bdevs": 2, 00:19:17.682 "num_base_bdevs_discovered": 1, 00:19:17.682 "num_base_bdevs_operational": 1, 00:19:17.682 "base_bdevs_list": [ 00:19:17.682 { 00:19:17.682 "name": null, 00:19:17.682 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:17.682 "is_configured": false, 00:19:17.682 "data_offset": 0, 00:19:17.682 "data_size": 7936 00:19:17.682 }, 00:19:17.682 { 00:19:17.682 "name": "BaseBdev2", 00:19:17.682 "uuid": "1d52e564-03e0-540f-b459-0f4a9ae667f6", 00:19:17.682 "is_configured": true, 00:19:17.682 "data_offset": 256, 00:19:17.682 "data_size": 7936 00:19:17.682 } 00:19:17.682 ] 00:19:17.682 }' 00:19:17.682 04:10:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:17.682 04:10:10 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:17.945 04:10:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:19:18.204 04:10:11 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:18.204 04:10:11 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:18.204 [2024-12-06 04:10:11.301012] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:19:18.204 [2024-12-06 04:10:11.318878] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00018d260 00:19:18.204 04:10:11 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:18.204 04:10:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@647 -- # sleep 1 00:19:18.204 [2024-12-06 04:10:11.320698] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:19:19.173 04:10:12 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:19.173 04:10:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:19.173 04:10:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:19.173 04:10:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:19:19.173 04:10:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:19.173 04:10:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:19.173 04:10:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:19.173 04:10:12 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:19.173 04:10:12 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:19.173 04:10:12 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:19.173 04:10:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:19.173 "name": "raid_bdev1", 00:19:19.173 "uuid": "a94e3b71-9c17-4bcb-93fd-288d9e1a7c91", 00:19:19.173 "strip_size_kb": 0, 00:19:19.173 "state": "online", 00:19:19.173 "raid_level": "raid1", 00:19:19.173 "superblock": true, 00:19:19.173 "num_base_bdevs": 2, 00:19:19.173 "num_base_bdevs_discovered": 2, 00:19:19.173 "num_base_bdevs_operational": 2, 00:19:19.173 "process": { 00:19:19.173 "type": "rebuild", 00:19:19.173 "target": "spare", 00:19:19.174 "progress": { 00:19:19.174 "blocks": 2560, 00:19:19.174 "percent": 32 00:19:19.174 } 00:19:19.174 }, 00:19:19.174 "base_bdevs_list": [ 00:19:19.174 { 00:19:19.174 "name": "spare", 00:19:19.174 "uuid": "96bf506b-74be-5184-ada8-f6d2d8515dbd", 00:19:19.174 "is_configured": true, 00:19:19.174 "data_offset": 256, 
00:19:19.174 "data_size": 7936 00:19:19.174 }, 00:19:19.174 { 00:19:19.174 "name": "BaseBdev2", 00:19:19.174 "uuid": "1d52e564-03e0-540f-b459-0f4a9ae667f6", 00:19:19.174 "is_configured": true, 00:19:19.174 "data_offset": 256, 00:19:19.174 "data_size": 7936 00:19:19.174 } 00:19:19.174 ] 00:19:19.174 }' 00:19:19.174 04:10:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:19.174 04:10:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:19.174 04:10:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:19.174 04:10:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:19:19.174 04:10:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:19:19.174 04:10:12 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:19.174 04:10:12 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:19.174 [2024-12-06 04:10:12.479932] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:19:19.174 [2024-12-06 04:10:12.525982] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:19:19.174 [2024-12-06 04:10:12.526075] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:19.174 [2024-12-06 04:10:12.526091] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:19:19.174 [2024-12-06 04:10:12.526116] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:19:19.433 04:10:12 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:19.434 04:10:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:19:19.434 
04:10:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:19.434 04:10:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:19.434 04:10:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:19.434 04:10:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:19.434 04:10:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:19:19.434 04:10:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:19.434 04:10:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:19.434 04:10:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:19.434 04:10:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:19.434 04:10:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:19.434 04:10:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:19.434 04:10:12 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:19.434 04:10:12 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:19.434 04:10:12 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:19.434 04:10:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:19.434 "name": "raid_bdev1", 00:19:19.434 "uuid": "a94e3b71-9c17-4bcb-93fd-288d9e1a7c91", 00:19:19.434 "strip_size_kb": 0, 00:19:19.434 "state": "online", 00:19:19.434 "raid_level": "raid1", 00:19:19.434 "superblock": true, 00:19:19.434 "num_base_bdevs": 2, 00:19:19.434 "num_base_bdevs_discovered": 1, 00:19:19.434 
"num_base_bdevs_operational": 1, 00:19:19.434 "base_bdevs_list": [ 00:19:19.434 { 00:19:19.434 "name": null, 00:19:19.434 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:19.434 "is_configured": false, 00:19:19.434 "data_offset": 0, 00:19:19.434 "data_size": 7936 00:19:19.434 }, 00:19:19.434 { 00:19:19.434 "name": "BaseBdev2", 00:19:19.434 "uuid": "1d52e564-03e0-540f-b459-0f4a9ae667f6", 00:19:19.434 "is_configured": true, 00:19:19.434 "data_offset": 256, 00:19:19.434 "data_size": 7936 00:19:19.434 } 00:19:19.434 ] 00:19:19.434 }' 00:19:19.434 04:10:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:19.434 04:10:12 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:19.694 04:10:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:19:19.694 04:10:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:19.694 04:10:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:19:19.694 04:10:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none 00:19:19.694 04:10:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:19.694 04:10:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:19.694 04:10:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:19.694 04:10:13 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:19.694 04:10:13 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:19.694 04:10:13 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:19.954 04:10:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:19.954 
"name": "raid_bdev1", 00:19:19.954 "uuid": "a94e3b71-9c17-4bcb-93fd-288d9e1a7c91", 00:19:19.954 "strip_size_kb": 0, 00:19:19.954 "state": "online", 00:19:19.954 "raid_level": "raid1", 00:19:19.954 "superblock": true, 00:19:19.954 "num_base_bdevs": 2, 00:19:19.954 "num_base_bdevs_discovered": 1, 00:19:19.954 "num_base_bdevs_operational": 1, 00:19:19.954 "base_bdevs_list": [ 00:19:19.954 { 00:19:19.954 "name": null, 00:19:19.954 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:19.954 "is_configured": false, 00:19:19.954 "data_offset": 0, 00:19:19.954 "data_size": 7936 00:19:19.954 }, 00:19:19.954 { 00:19:19.954 "name": "BaseBdev2", 00:19:19.954 "uuid": "1d52e564-03e0-540f-b459-0f4a9ae667f6", 00:19:19.954 "is_configured": true, 00:19:19.954 "data_offset": 256, 00:19:19.954 "data_size": 7936 00:19:19.954 } 00:19:19.954 ] 00:19:19.954 }' 00:19:19.954 04:10:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:19.954 04:10:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:19:19.954 04:10:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:19.954 04:10:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:19:19.954 04:10:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:19:19.954 04:10:13 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:19.954 04:10:13 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:19.954 [2024-12-06 04:10:13.136494] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:19:19.954 [2024-12-06 04:10:13.152689] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00018d330 00:19:19.954 04:10:13 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 
]] 00:19:19.954 04:10:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@663 -- # sleep 1 00:19:19.954 [2024-12-06 04:10:13.154568] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:19:20.895 04:10:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:20.895 04:10:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:20.895 04:10:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:20.895 04:10:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:19:20.895 04:10:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:20.895 04:10:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:20.895 04:10:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:20.895 04:10:14 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:20.895 04:10:14 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:20.895 04:10:14 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:20.895 04:10:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:20.895 "name": "raid_bdev1", 00:19:20.895 "uuid": "a94e3b71-9c17-4bcb-93fd-288d9e1a7c91", 00:19:20.895 "strip_size_kb": 0, 00:19:20.895 "state": "online", 00:19:20.895 "raid_level": "raid1", 00:19:20.895 "superblock": true, 00:19:20.895 "num_base_bdevs": 2, 00:19:20.895 "num_base_bdevs_discovered": 2, 00:19:20.895 "num_base_bdevs_operational": 2, 00:19:20.895 "process": { 00:19:20.895 "type": "rebuild", 00:19:20.895 "target": "spare", 00:19:20.895 "progress": { 00:19:20.895 "blocks": 2560, 00:19:20.895 
"percent": 32 00:19:20.896 } 00:19:20.896 }, 00:19:20.896 "base_bdevs_list": [ 00:19:20.896 { 00:19:20.896 "name": "spare", 00:19:20.896 "uuid": "96bf506b-74be-5184-ada8-f6d2d8515dbd", 00:19:20.896 "is_configured": true, 00:19:20.896 "data_offset": 256, 00:19:20.896 "data_size": 7936 00:19:20.896 }, 00:19:20.896 { 00:19:20.896 "name": "BaseBdev2", 00:19:20.896 "uuid": "1d52e564-03e0-540f-b459-0f4a9ae667f6", 00:19:20.896 "is_configured": true, 00:19:20.896 "data_offset": 256, 00:19:20.896 "data_size": 7936 00:19:20.896 } 00:19:20.896 ] 00:19:20.896 }' 00:19:20.896 04:10:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:21.156 04:10:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:21.156 04:10:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:21.156 04:10:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:19:21.156 04:10:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:19:21.156 04:10:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:19:21.156 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:19:21.156 04:10:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:19:21.156 04:10:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:19:21.156 04:10:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:19:21.156 04:10:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@706 -- # local timeout=692 00:19:21.156 04:10:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:19:21.156 04:10:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process 
raid_bdev1 rebuild spare 00:19:21.156 04:10:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:21.156 04:10:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:21.156 04:10:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:19:21.156 04:10:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:21.156 04:10:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:21.156 04:10:14 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:21.156 04:10:14 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:21.156 04:10:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:21.156 04:10:14 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:21.156 04:10:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:21.156 "name": "raid_bdev1", 00:19:21.156 "uuid": "a94e3b71-9c17-4bcb-93fd-288d9e1a7c91", 00:19:21.156 "strip_size_kb": 0, 00:19:21.156 "state": "online", 00:19:21.156 "raid_level": "raid1", 00:19:21.156 "superblock": true, 00:19:21.156 "num_base_bdevs": 2, 00:19:21.156 "num_base_bdevs_discovered": 2, 00:19:21.156 "num_base_bdevs_operational": 2, 00:19:21.156 "process": { 00:19:21.156 "type": "rebuild", 00:19:21.156 "target": "spare", 00:19:21.156 "progress": { 00:19:21.156 "blocks": 2816, 00:19:21.156 "percent": 35 00:19:21.156 } 00:19:21.156 }, 00:19:21.156 "base_bdevs_list": [ 00:19:21.156 { 00:19:21.156 "name": "spare", 00:19:21.156 "uuid": "96bf506b-74be-5184-ada8-f6d2d8515dbd", 00:19:21.156 "is_configured": true, 00:19:21.156 "data_offset": 256, 00:19:21.156 "data_size": 7936 00:19:21.156 }, 00:19:21.156 { 00:19:21.156 "name": "BaseBdev2", 
00:19:21.156 "uuid": "1d52e564-03e0-540f-b459-0f4a9ae667f6", 00:19:21.156 "is_configured": true, 00:19:21.156 "data_offset": 256, 00:19:21.156 "data_size": 7936 00:19:21.156 } 00:19:21.156 ] 00:19:21.156 }' 00:19:21.156 04:10:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:21.156 04:10:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:21.156 04:10:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:21.156 04:10:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:19:21.156 04:10:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@711 -- # sleep 1 00:19:22.537 04:10:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:19:22.537 04:10:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:22.537 04:10:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:22.537 04:10:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:22.537 04:10:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:19:22.537 04:10:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:22.537 04:10:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:22.537 04:10:15 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:22.537 04:10:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:22.537 04:10:15 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:22.537 04:10:15 bdev_raid.raid_rebuild_test_sb_4k -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:22.537 04:10:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:22.537 "name": "raid_bdev1", 00:19:22.537 "uuid": "a94e3b71-9c17-4bcb-93fd-288d9e1a7c91", 00:19:22.537 "strip_size_kb": 0, 00:19:22.537 "state": "online", 00:19:22.537 "raid_level": "raid1", 00:19:22.537 "superblock": true, 00:19:22.537 "num_base_bdevs": 2, 00:19:22.537 "num_base_bdevs_discovered": 2, 00:19:22.537 "num_base_bdevs_operational": 2, 00:19:22.537 "process": { 00:19:22.537 "type": "rebuild", 00:19:22.537 "target": "spare", 00:19:22.537 "progress": { 00:19:22.537 "blocks": 5888, 00:19:22.537 "percent": 74 00:19:22.537 } 00:19:22.537 }, 00:19:22.537 "base_bdevs_list": [ 00:19:22.537 { 00:19:22.537 "name": "spare", 00:19:22.537 "uuid": "96bf506b-74be-5184-ada8-f6d2d8515dbd", 00:19:22.537 "is_configured": true, 00:19:22.537 "data_offset": 256, 00:19:22.537 "data_size": 7936 00:19:22.537 }, 00:19:22.537 { 00:19:22.537 "name": "BaseBdev2", 00:19:22.537 "uuid": "1d52e564-03e0-540f-b459-0f4a9ae667f6", 00:19:22.537 "is_configured": true, 00:19:22.537 "data_offset": 256, 00:19:22.537 "data_size": 7936 00:19:22.537 } 00:19:22.537 ] 00:19:22.537 }' 00:19:22.537 04:10:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:22.537 04:10:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:22.537 04:10:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:22.537 04:10:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:19:22.537 04:10:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@711 -- # sleep 1 00:19:23.106 [2024-12-06 04:10:16.267895] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:19:23.106 [2024-12-06 04:10:16.268086] 
bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:19:23.106 [2024-12-06 04:10:16.268203] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:23.383 04:10:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:19:23.383 04:10:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:23.384 04:10:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:23.384 04:10:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:23.384 04:10:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:19:23.384 04:10:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:23.384 04:10:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:23.384 04:10:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:23.384 04:10:16 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:23.384 04:10:16 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:23.384 04:10:16 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:23.384 04:10:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:23.384 "name": "raid_bdev1", 00:19:23.384 "uuid": "a94e3b71-9c17-4bcb-93fd-288d9e1a7c91", 00:19:23.384 "strip_size_kb": 0, 00:19:23.384 "state": "online", 00:19:23.384 "raid_level": "raid1", 00:19:23.384 "superblock": true, 00:19:23.384 "num_base_bdevs": 2, 00:19:23.384 "num_base_bdevs_discovered": 2, 00:19:23.384 "num_base_bdevs_operational": 2, 00:19:23.384 "base_bdevs_list": [ 00:19:23.384 { 00:19:23.384 "name": 
"spare", 00:19:23.384 "uuid": "96bf506b-74be-5184-ada8-f6d2d8515dbd", 00:19:23.384 "is_configured": true, 00:19:23.384 "data_offset": 256, 00:19:23.384 "data_size": 7936 00:19:23.384 }, 00:19:23.384 { 00:19:23.384 "name": "BaseBdev2", 00:19:23.384 "uuid": "1d52e564-03e0-540f-b459-0f4a9ae667f6", 00:19:23.384 "is_configured": true, 00:19:23.384 "data_offset": 256, 00:19:23.384 "data_size": 7936 00:19:23.384 } 00:19:23.384 ] 00:19:23.384 }' 00:19:23.384 04:10:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:23.384 04:10:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:19:23.644 04:10:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:23.644 04:10:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:19:23.644 04:10:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@709 -- # break 00:19:23.644 04:10:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:19:23.644 04:10:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:23.644 04:10:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:19:23.644 04:10:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none 00:19:23.644 04:10:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:23.644 04:10:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:23.644 04:10:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:23.644 04:10:16 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:23.644 04:10:16 bdev_raid.raid_rebuild_test_sb_4k -- 
common/autotest_common.sh@10 -- # set +x 00:19:23.644 04:10:16 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:23.644 04:10:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:23.644 "name": "raid_bdev1", 00:19:23.644 "uuid": "a94e3b71-9c17-4bcb-93fd-288d9e1a7c91", 00:19:23.644 "strip_size_kb": 0, 00:19:23.644 "state": "online", 00:19:23.644 "raid_level": "raid1", 00:19:23.644 "superblock": true, 00:19:23.644 "num_base_bdevs": 2, 00:19:23.644 "num_base_bdevs_discovered": 2, 00:19:23.644 "num_base_bdevs_operational": 2, 00:19:23.644 "base_bdevs_list": [ 00:19:23.644 { 00:19:23.644 "name": "spare", 00:19:23.644 "uuid": "96bf506b-74be-5184-ada8-f6d2d8515dbd", 00:19:23.644 "is_configured": true, 00:19:23.644 "data_offset": 256, 00:19:23.644 "data_size": 7936 00:19:23.644 }, 00:19:23.644 { 00:19:23.644 "name": "BaseBdev2", 00:19:23.644 "uuid": "1d52e564-03e0-540f-b459-0f4a9ae667f6", 00:19:23.644 "is_configured": true, 00:19:23.644 "data_offset": 256, 00:19:23.644 "data_size": 7936 00:19:23.644 } 00:19:23.644 ] 00:19:23.644 }' 00:19:23.644 04:10:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:23.644 04:10:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:19:23.644 04:10:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:23.644 04:10:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:19:23.644 04:10:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:19:23.644 04:10:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:23.644 04:10:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:23.644 04:10:16 bdev_raid.raid_rebuild_test_sb_4k 
-- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:23.644 04:10:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:23.644 04:10:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:19:23.644 04:10:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:23.644 04:10:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:23.644 04:10:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:23.644 04:10:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:23.644 04:10:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:23.644 04:10:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:23.644 04:10:16 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:23.645 04:10:16 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:23.645 04:10:16 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:23.645 04:10:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:23.645 "name": "raid_bdev1", 00:19:23.645 "uuid": "a94e3b71-9c17-4bcb-93fd-288d9e1a7c91", 00:19:23.645 "strip_size_kb": 0, 00:19:23.645 "state": "online", 00:19:23.645 "raid_level": "raid1", 00:19:23.645 "superblock": true, 00:19:23.645 "num_base_bdevs": 2, 00:19:23.645 "num_base_bdevs_discovered": 2, 00:19:23.645 "num_base_bdevs_operational": 2, 00:19:23.645 "base_bdevs_list": [ 00:19:23.645 { 00:19:23.645 "name": "spare", 00:19:23.645 "uuid": "96bf506b-74be-5184-ada8-f6d2d8515dbd", 00:19:23.645 "is_configured": true, 00:19:23.645 "data_offset": 256, 00:19:23.645 "data_size": 7936 00:19:23.645 }, 00:19:23.645 
{ 00:19:23.645 "name": "BaseBdev2", 00:19:23.645 "uuid": "1d52e564-03e0-540f-b459-0f4a9ae667f6", 00:19:23.645 "is_configured": true, 00:19:23.645 "data_offset": 256, 00:19:23.645 "data_size": 7936 00:19:23.645 } 00:19:23.645 ] 00:19:23.645 }' 00:19:23.645 04:10:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:23.645 04:10:16 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:24.216 04:10:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:19:24.216 04:10:17 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:24.216 04:10:17 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:24.216 [2024-12-06 04:10:17.317146] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:19:24.216 [2024-12-06 04:10:17.317243] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:19:24.216 [2024-12-06 04:10:17.317346] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:24.216 [2024-12-06 04:10:17.317433] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:19:24.216 [2024-12-06 04:10:17.317484] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:19:24.217 04:10:17 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:24.217 04:10:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@720 -- # jq length 00:19:24.217 04:10:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:24.217 04:10:17 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:24.217 04:10:17 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:24.217 
04:10:17 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:24.217 04:10:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:19:24.217 04:10:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:19:24.217 04:10:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:19:24.217 04:10:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:19:24.217 04:10:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:19:24.217 04:10:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:19:24.217 04:10:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@10 -- # local bdev_list 00:19:24.217 04:10:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:19:24.217 04:10:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@11 -- # local nbd_list 00:19:24.217 04:10:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@12 -- # local i 00:19:24.217 04:10:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:19:24.217 04:10:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:19:24.217 04:10:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:19:24.217 /dev/nbd0 00:19:24.477 04:10:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:19:24.477 04:10:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:19:24.477 04:10:17 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:19:24.477 04:10:17 
bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@873 -- # local i 00:19:24.477 04:10:17 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:19:24.477 04:10:17 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:19:24.477 04:10:17 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:19:24.477 04:10:17 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@877 -- # break 00:19:24.477 04:10:17 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:19:24.477 04:10:17 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:19:24.477 04:10:17 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:19:24.477 1+0 records in 00:19:24.477 1+0 records out 00:19:24.477 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000326366 s, 12.6 MB/s 00:19:24.477 04:10:17 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:24.477 04:10:17 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@890 -- # size=4096 00:19:24.477 04:10:17 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:24.477 04:10:17 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:19:24.477 04:10:17 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@893 -- # return 0 00:19:24.477 04:10:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:19:24.477 04:10:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:19:24.477 04:10:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
-s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:19:24.477 /dev/nbd1 00:19:24.744 04:10:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:19:24.744 04:10:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:19:24.744 04:10:17 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:19:24.744 04:10:17 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@873 -- # local i 00:19:24.744 04:10:17 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:19:24.744 04:10:17 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:19:24.744 04:10:17 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:19:24.744 04:10:17 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@877 -- # break 00:19:24.744 04:10:17 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:19:24.744 04:10:17 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:19:24.744 04:10:17 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:19:24.744 1+0 records in 00:19:24.744 1+0 records out 00:19:24.744 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000458609 s, 8.9 MB/s 00:19:24.744 04:10:17 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:24.744 04:10:17 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@890 -- # size=4096 00:19:24.744 04:10:17 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:24.744 04:10:17 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 
00:19:24.744 04:10:17 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@893 -- # return 0 00:19:24.744 04:10:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:19:24.744 04:10:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:19:24.744 04:10:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:19:24.744 04:10:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:19:24.744 04:10:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:19:24.744 04:10:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:19:24.744 04:10:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@50 -- # local nbd_list 00:19:24.744 04:10:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@51 -- # local i 00:19:24.744 04:10:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:19:24.744 04:10:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:19:25.023 04:10:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:19:25.023 04:10:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:19:25.023 04:10:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:19:25.023 04:10:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:19:25.023 04:10:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:19:25.023 04:10:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:19:25.023 04:10:18 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/nbd_common.sh@41 -- # break 00:19:25.023 04:10:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@45 -- # return 0 00:19:25.023 04:10:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:19:25.023 04:10:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:19:25.283 04:10:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:19:25.283 04:10:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:19:25.283 04:10:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:19:25.283 04:10:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:19:25.283 04:10:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:19:25.283 04:10:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:19:25.283 04:10:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@41 -- # break 00:19:25.283 04:10:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@45 -- # return 0 00:19:25.283 04:10:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:19:25.283 04:10:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:19:25.283 04:10:18 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:25.283 04:10:18 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:25.283 04:10:18 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:25.283 04:10:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:19:25.283 04:10:18 bdev_raid.raid_rebuild_test_sb_4k -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:19:25.283 04:10:18 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:25.283 [2024-12-06 04:10:18.453078] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:19:25.283 [2024-12-06 04:10:18.453217] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:25.283 [2024-12-06 04:10:18.453247] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:19:25.283 [2024-12-06 04:10:18.453256] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:25.283 [2024-12-06 04:10:18.455562] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:25.283 [2024-12-06 04:10:18.455601] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:19:25.283 [2024-12-06 04:10:18.455694] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:19:25.283 [2024-12-06 04:10:18.455742] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:19:25.283 [2024-12-06 04:10:18.455925] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:19:25.283 spare 00:19:25.283 04:10:18 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:25.283 04:10:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:19:25.283 04:10:18 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:25.283 04:10:18 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:25.283 [2024-12-06 04:10:18.555823] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:19:25.283 [2024-12-06 04:10:18.555853] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:19:25.283 [2024-12-06 04:10:18.556139] 
bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c1b50 00:19:25.283 [2024-12-06 04:10:18.556327] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:19:25.283 [2024-12-06 04:10:18.556344] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:19:25.283 [2024-12-06 04:10:18.556537] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:25.283 04:10:18 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:25.283 04:10:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:19:25.283 04:10:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:25.283 04:10:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:25.283 04:10:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:25.283 04:10:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:25.283 04:10:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:19:25.283 04:10:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:25.283 04:10:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:25.283 04:10:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:25.283 04:10:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:25.283 04:10:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:25.283 04:10:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:25.283 04:10:18 
bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:25.283 04:10:18 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:25.283 04:10:18 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:25.283 04:10:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:25.283 "name": "raid_bdev1", 00:19:25.283 "uuid": "a94e3b71-9c17-4bcb-93fd-288d9e1a7c91", 00:19:25.283 "strip_size_kb": 0, 00:19:25.283 "state": "online", 00:19:25.283 "raid_level": "raid1", 00:19:25.283 "superblock": true, 00:19:25.283 "num_base_bdevs": 2, 00:19:25.283 "num_base_bdevs_discovered": 2, 00:19:25.283 "num_base_bdevs_operational": 2, 00:19:25.283 "base_bdevs_list": [ 00:19:25.283 { 00:19:25.283 "name": "spare", 00:19:25.283 "uuid": "96bf506b-74be-5184-ada8-f6d2d8515dbd", 00:19:25.283 "is_configured": true, 00:19:25.283 "data_offset": 256, 00:19:25.283 "data_size": 7936 00:19:25.283 }, 00:19:25.283 { 00:19:25.283 "name": "BaseBdev2", 00:19:25.283 "uuid": "1d52e564-03e0-540f-b459-0f4a9ae667f6", 00:19:25.283 "is_configured": true, 00:19:25.283 "data_offset": 256, 00:19:25.283 "data_size": 7936 00:19:25.283 } 00:19:25.283 ] 00:19:25.283 }' 00:19:25.283 04:10:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:25.283 04:10:18 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:25.853 04:10:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:19:25.853 04:10:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:25.853 04:10:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:19:25.853 04:10:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none 00:19:25.854 04:10:19 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:25.854 04:10:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:25.854 04:10:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:25.854 04:10:19 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:25.854 04:10:19 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:25.854 04:10:19 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:25.854 04:10:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:25.854 "name": "raid_bdev1", 00:19:25.854 "uuid": "a94e3b71-9c17-4bcb-93fd-288d9e1a7c91", 00:19:25.854 "strip_size_kb": 0, 00:19:25.854 "state": "online", 00:19:25.854 "raid_level": "raid1", 00:19:25.854 "superblock": true, 00:19:25.854 "num_base_bdevs": 2, 00:19:25.854 "num_base_bdevs_discovered": 2, 00:19:25.854 "num_base_bdevs_operational": 2, 00:19:25.854 "base_bdevs_list": [ 00:19:25.854 { 00:19:25.854 "name": "spare", 00:19:25.854 "uuid": "96bf506b-74be-5184-ada8-f6d2d8515dbd", 00:19:25.854 "is_configured": true, 00:19:25.854 "data_offset": 256, 00:19:25.854 "data_size": 7936 00:19:25.854 }, 00:19:25.854 { 00:19:25.854 "name": "BaseBdev2", 00:19:25.854 "uuid": "1d52e564-03e0-540f-b459-0f4a9ae667f6", 00:19:25.854 "is_configured": true, 00:19:25.854 "data_offset": 256, 00:19:25.854 "data_size": 7936 00:19:25.854 } 00:19:25.854 ] 00:19:25.854 }' 00:19:25.854 04:10:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:25.854 04:10:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:19:25.854 04:10:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:25.854 04:10:19 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:19:25.854 04:10:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:25.854 04:10:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:19:25.854 04:10:19 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:25.854 04:10:19 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:25.854 04:10:19 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:26.112 04:10:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:19:26.112 04:10:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:19:26.112 04:10:19 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:26.112 04:10:19 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:26.112 [2024-12-06 04:10:19.211946] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:19:26.112 04:10:19 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:26.112 04:10:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:19:26.112 04:10:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:26.112 04:10:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:26.112 04:10:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:26.112 04:10:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:26.112 04:10:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:19:26.112 04:10:19 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:26.112 04:10:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:26.112 04:10:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:26.112 04:10:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:26.112 04:10:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:26.112 04:10:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:26.112 04:10:19 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:26.112 04:10:19 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:26.112 04:10:19 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:26.112 04:10:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:26.112 "name": "raid_bdev1", 00:19:26.112 "uuid": "a94e3b71-9c17-4bcb-93fd-288d9e1a7c91", 00:19:26.112 "strip_size_kb": 0, 00:19:26.112 "state": "online", 00:19:26.112 "raid_level": "raid1", 00:19:26.112 "superblock": true, 00:19:26.112 "num_base_bdevs": 2, 00:19:26.112 "num_base_bdevs_discovered": 1, 00:19:26.112 "num_base_bdevs_operational": 1, 00:19:26.112 "base_bdevs_list": [ 00:19:26.112 { 00:19:26.112 "name": null, 00:19:26.112 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:26.112 "is_configured": false, 00:19:26.112 "data_offset": 0, 00:19:26.112 "data_size": 7936 00:19:26.112 }, 00:19:26.112 { 00:19:26.112 "name": "BaseBdev2", 00:19:26.112 "uuid": "1d52e564-03e0-540f-b459-0f4a9ae667f6", 00:19:26.112 "is_configured": true, 00:19:26.112 "data_offset": 256, 00:19:26.112 "data_size": 7936 00:19:26.112 } 00:19:26.112 ] 00:19:26.112 }' 00:19:26.112 04:10:19 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:26.112 04:10:19 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:26.370 04:10:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:19:26.370 04:10:19 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:26.370 04:10:19 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:26.370 [2024-12-06 04:10:19.639290] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:19:26.370 [2024-12-06 04:10:19.639539] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:19:26.370 [2024-12-06 04:10:19.639605] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:19:26.370 [2024-12-06 04:10:19.639665] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:19:26.370 [2024-12-06 04:10:19.655414] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c1c20 00:19:26.370 04:10:19 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:26.370 04:10:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@757 -- # sleep 1 00:19:26.370 [2024-12-06 04:10:19.657272] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:19:27.748 04:10:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:27.748 04:10:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:27.748 04:10:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:27.748 04:10:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:19:27.748 
04:10:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:27.748 04:10:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:27.748 04:10:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:27.748 04:10:20 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:27.748 04:10:20 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:27.748 04:10:20 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:27.748 04:10:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:27.748 "name": "raid_bdev1", 00:19:27.748 "uuid": "a94e3b71-9c17-4bcb-93fd-288d9e1a7c91", 00:19:27.748 "strip_size_kb": 0, 00:19:27.748 "state": "online", 00:19:27.748 "raid_level": "raid1", 00:19:27.748 "superblock": true, 00:19:27.748 "num_base_bdevs": 2, 00:19:27.748 "num_base_bdevs_discovered": 2, 00:19:27.748 "num_base_bdevs_operational": 2, 00:19:27.748 "process": { 00:19:27.748 "type": "rebuild", 00:19:27.748 "target": "spare", 00:19:27.748 "progress": { 00:19:27.748 "blocks": 2560, 00:19:27.748 "percent": 32 00:19:27.748 } 00:19:27.748 }, 00:19:27.748 "base_bdevs_list": [ 00:19:27.748 { 00:19:27.748 "name": "spare", 00:19:27.748 "uuid": "96bf506b-74be-5184-ada8-f6d2d8515dbd", 00:19:27.748 "is_configured": true, 00:19:27.748 "data_offset": 256, 00:19:27.748 "data_size": 7936 00:19:27.748 }, 00:19:27.748 { 00:19:27.748 "name": "BaseBdev2", 00:19:27.748 "uuid": "1d52e564-03e0-540f-b459-0f4a9ae667f6", 00:19:27.748 "is_configured": true, 00:19:27.748 "data_offset": 256, 00:19:27.748 "data_size": 7936 00:19:27.748 } 00:19:27.748 ] 00:19:27.748 }' 00:19:27.748 04:10:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:27.748 04:10:20 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:27.748 04:10:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:27.748 04:10:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:19:27.748 04:10:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:19:27.748 04:10:20 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:27.748 04:10:20 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:27.748 [2024-12-06 04:10:20.801449] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:19:27.748 [2024-12-06 04:10:20.862378] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:19:27.748 [2024-12-06 04:10:20.862576] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:27.748 [2024-12-06 04:10:20.862593] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:19:27.748 [2024-12-06 04:10:20.862619] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:19:27.748 04:10:20 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:27.748 04:10:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:19:27.748 04:10:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:27.748 04:10:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:27.748 04:10:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:27.748 04:10:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:27.748 04:10:20 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:19:27.748 04:10:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:27.748 04:10:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:27.748 04:10:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:27.748 04:10:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:27.748 04:10:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:27.748 04:10:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:27.748 04:10:20 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:27.748 04:10:20 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:27.748 04:10:20 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:27.749 04:10:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:27.749 "name": "raid_bdev1", 00:19:27.749 "uuid": "a94e3b71-9c17-4bcb-93fd-288d9e1a7c91", 00:19:27.749 "strip_size_kb": 0, 00:19:27.749 "state": "online", 00:19:27.749 "raid_level": "raid1", 00:19:27.749 "superblock": true, 00:19:27.749 "num_base_bdevs": 2, 00:19:27.749 "num_base_bdevs_discovered": 1, 00:19:27.749 "num_base_bdevs_operational": 1, 00:19:27.749 "base_bdevs_list": [ 00:19:27.749 { 00:19:27.749 "name": null, 00:19:27.749 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:27.749 "is_configured": false, 00:19:27.749 "data_offset": 0, 00:19:27.749 "data_size": 7936 00:19:27.749 }, 00:19:27.749 { 00:19:27.749 "name": "BaseBdev2", 00:19:27.749 "uuid": "1d52e564-03e0-540f-b459-0f4a9ae667f6", 00:19:27.749 "is_configured": true, 00:19:27.749 "data_offset": 256, 00:19:27.749 
"data_size": 7936 00:19:27.749 } 00:19:27.749 ] 00:19:27.749 }' 00:19:27.749 04:10:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:27.749 04:10:20 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:28.319 04:10:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:19:28.319 04:10:21 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:28.319 04:10:21 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:28.319 [2024-12-06 04:10:21.372582] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:19:28.319 [2024-12-06 04:10:21.372714] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:28.319 [2024-12-06 04:10:21.372767] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:19:28.319 [2024-12-06 04:10:21.372803] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:28.319 [2024-12-06 04:10:21.373329] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:28.319 [2024-12-06 04:10:21.373413] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:19:28.319 [2024-12-06 04:10:21.373534] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:19:28.319 [2024-12-06 04:10:21.373581] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:19:28.319 [2024-12-06 04:10:21.373628] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:19:28.319 [2024-12-06 04:10:21.373717] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:19:28.319 [2024-12-06 04:10:21.389939] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c1cf0 00:19:28.319 spare 00:19:28.319 04:10:21 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:28.319 04:10:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@764 -- # sleep 1 00:19:28.319 [2024-12-06 04:10:21.391869] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:19:29.258 04:10:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:29.258 04:10:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:29.258 04:10:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:29.258 04:10:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:19:29.258 04:10:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:29.258 04:10:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:29.258 04:10:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:29.258 04:10:22 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:29.259 04:10:22 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:29.259 04:10:22 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:29.259 04:10:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:29.259 "name": "raid_bdev1", 00:19:29.259 "uuid": "a94e3b71-9c17-4bcb-93fd-288d9e1a7c91", 00:19:29.259 "strip_size_kb": 0, 00:19:29.259 
"state": "online", 00:19:29.259 "raid_level": "raid1", 00:19:29.259 "superblock": true, 00:19:29.259 "num_base_bdevs": 2, 00:19:29.259 "num_base_bdevs_discovered": 2, 00:19:29.259 "num_base_bdevs_operational": 2, 00:19:29.259 "process": { 00:19:29.259 "type": "rebuild", 00:19:29.259 "target": "spare", 00:19:29.259 "progress": { 00:19:29.259 "blocks": 2560, 00:19:29.259 "percent": 32 00:19:29.259 } 00:19:29.259 }, 00:19:29.259 "base_bdevs_list": [ 00:19:29.259 { 00:19:29.259 "name": "spare", 00:19:29.259 "uuid": "96bf506b-74be-5184-ada8-f6d2d8515dbd", 00:19:29.259 "is_configured": true, 00:19:29.259 "data_offset": 256, 00:19:29.259 "data_size": 7936 00:19:29.259 }, 00:19:29.259 { 00:19:29.259 "name": "BaseBdev2", 00:19:29.259 "uuid": "1d52e564-03e0-540f-b459-0f4a9ae667f6", 00:19:29.259 "is_configured": true, 00:19:29.259 "data_offset": 256, 00:19:29.259 "data_size": 7936 00:19:29.259 } 00:19:29.259 ] 00:19:29.259 }' 00:19:29.259 04:10:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:29.259 04:10:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:29.259 04:10:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:29.259 04:10:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:19:29.259 04:10:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:19:29.259 04:10:22 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:29.259 04:10:22 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:29.259 [2024-12-06 04:10:22.555003] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:19:29.259 [2024-12-06 04:10:22.597217] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 
00:19:29.259 [2024-12-06 04:10:22.597335] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:29.259 [2024-12-06 04:10:22.597374] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:19:29.259 [2024-12-06 04:10:22.597396] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:19:29.519 04:10:22 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:29.519 04:10:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:19:29.519 04:10:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:29.519 04:10:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:29.519 04:10:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:29.519 04:10:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:29.519 04:10:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:19:29.519 04:10:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:29.519 04:10:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:29.519 04:10:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:29.519 04:10:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:29.519 04:10:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:29.519 04:10:22 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:29.519 04:10:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:29.519 04:10:22 
bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:29.519 04:10:22 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:29.519 04:10:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:29.519 "name": "raid_bdev1", 00:19:29.519 "uuid": "a94e3b71-9c17-4bcb-93fd-288d9e1a7c91", 00:19:29.519 "strip_size_kb": 0, 00:19:29.519 "state": "online", 00:19:29.519 "raid_level": "raid1", 00:19:29.519 "superblock": true, 00:19:29.519 "num_base_bdevs": 2, 00:19:29.519 "num_base_bdevs_discovered": 1, 00:19:29.519 "num_base_bdevs_operational": 1, 00:19:29.519 "base_bdevs_list": [ 00:19:29.519 { 00:19:29.519 "name": null, 00:19:29.519 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:29.519 "is_configured": false, 00:19:29.519 "data_offset": 0, 00:19:29.519 "data_size": 7936 00:19:29.519 }, 00:19:29.519 { 00:19:29.519 "name": "BaseBdev2", 00:19:29.519 "uuid": "1d52e564-03e0-540f-b459-0f4a9ae667f6", 00:19:29.519 "is_configured": true, 00:19:29.519 "data_offset": 256, 00:19:29.519 "data_size": 7936 00:19:29.519 } 00:19:29.519 ] 00:19:29.519 }' 00:19:29.519 04:10:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:29.519 04:10:22 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:29.779 04:10:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:19:29.779 04:10:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:29.779 04:10:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:19:29.779 04:10:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none 00:19:29.779 04:10:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:29.779 04:10:23 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:29.779 04:10:23 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:29.779 04:10:23 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:29.779 04:10:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:29.779 04:10:23 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:29.779 04:10:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:29.779 "name": "raid_bdev1", 00:19:29.779 "uuid": "a94e3b71-9c17-4bcb-93fd-288d9e1a7c91", 00:19:29.779 "strip_size_kb": 0, 00:19:29.779 "state": "online", 00:19:29.779 "raid_level": "raid1", 00:19:29.779 "superblock": true, 00:19:29.779 "num_base_bdevs": 2, 00:19:29.779 "num_base_bdevs_discovered": 1, 00:19:29.779 "num_base_bdevs_operational": 1, 00:19:29.779 "base_bdevs_list": [ 00:19:29.779 { 00:19:29.779 "name": null, 00:19:29.779 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:29.779 "is_configured": false, 00:19:29.779 "data_offset": 0, 00:19:29.779 "data_size": 7936 00:19:29.779 }, 00:19:29.779 { 00:19:29.779 "name": "BaseBdev2", 00:19:29.779 "uuid": "1d52e564-03e0-540f-b459-0f4a9ae667f6", 00:19:29.779 "is_configured": true, 00:19:29.779 "data_offset": 256, 00:19:29.779 "data_size": 7936 00:19:29.779 } 00:19:29.779 ] 00:19:29.779 }' 00:19:29.779 04:10:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:30.039 04:10:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:19:30.039 04:10:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:30.039 04:10:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:19:30.039 04:10:23 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:19:30.039 04:10:23 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:30.039 04:10:23 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:30.039 04:10:23 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:30.039 04:10:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:19:30.039 04:10:23 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:30.039 04:10:23 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:30.039 [2024-12-06 04:10:23.239001] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:19:30.039 [2024-12-06 04:10:23.239069] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:30.039 [2024-12-06 04:10:23.239096] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:19:30.039 [2024-12-06 04:10:23.239115] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:30.039 [2024-12-06 04:10:23.239532] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:30.039 [2024-12-06 04:10:23.239549] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:19:30.039 [2024-12-06 04:10:23.239622] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:19:30.039 [2024-12-06 04:10:23.239635] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:19:30.039 [2024-12-06 04:10:23.239646] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:19:30.039 [2024-12-06 04:10:23.239656] 
bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:19:30.039 BaseBdev1 00:19:30.039 04:10:23 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:30.039 04:10:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@775 -- # sleep 1 00:19:30.977 04:10:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:19:30.977 04:10:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:30.977 04:10:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:30.977 04:10:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:30.977 04:10:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:30.977 04:10:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:19:30.977 04:10:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:30.977 04:10:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:30.977 04:10:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:30.977 04:10:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:30.977 04:10:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:30.977 04:10:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:30.977 04:10:24 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:30.977 04:10:24 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:30.977 04:10:24 bdev_raid.raid_rebuild_test_sb_4k -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:30.977 04:10:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:30.977 "name": "raid_bdev1", 00:19:30.977 "uuid": "a94e3b71-9c17-4bcb-93fd-288d9e1a7c91", 00:19:30.977 "strip_size_kb": 0, 00:19:30.977 "state": "online", 00:19:30.977 "raid_level": "raid1", 00:19:30.977 "superblock": true, 00:19:30.977 "num_base_bdevs": 2, 00:19:30.977 "num_base_bdevs_discovered": 1, 00:19:30.977 "num_base_bdevs_operational": 1, 00:19:30.977 "base_bdevs_list": [ 00:19:30.977 { 00:19:30.977 "name": null, 00:19:30.977 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:30.977 "is_configured": false, 00:19:30.977 "data_offset": 0, 00:19:30.977 "data_size": 7936 00:19:30.977 }, 00:19:30.977 { 00:19:30.977 "name": "BaseBdev2", 00:19:30.977 "uuid": "1d52e564-03e0-540f-b459-0f4a9ae667f6", 00:19:30.977 "is_configured": true, 00:19:30.977 "data_offset": 256, 00:19:30.977 "data_size": 7936 00:19:30.977 } 00:19:30.977 ] 00:19:30.977 }' 00:19:30.977 04:10:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:30.977 04:10:24 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:31.544 04:10:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:19:31.544 04:10:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:31.544 04:10:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:19:31.544 04:10:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none 00:19:31.544 04:10:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:31.544 04:10:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:31.544 04:10:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r 
'.[] | select(.name == "raid_bdev1")' 00:19:31.544 04:10:24 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:31.544 04:10:24 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:31.544 04:10:24 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:31.544 04:10:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:31.544 "name": "raid_bdev1", 00:19:31.544 "uuid": "a94e3b71-9c17-4bcb-93fd-288d9e1a7c91", 00:19:31.544 "strip_size_kb": 0, 00:19:31.544 "state": "online", 00:19:31.544 "raid_level": "raid1", 00:19:31.544 "superblock": true, 00:19:31.544 "num_base_bdevs": 2, 00:19:31.544 "num_base_bdevs_discovered": 1, 00:19:31.544 "num_base_bdevs_operational": 1, 00:19:31.544 "base_bdevs_list": [ 00:19:31.544 { 00:19:31.544 "name": null, 00:19:31.544 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:31.544 "is_configured": false, 00:19:31.544 "data_offset": 0, 00:19:31.544 "data_size": 7936 00:19:31.544 }, 00:19:31.544 { 00:19:31.544 "name": "BaseBdev2", 00:19:31.544 "uuid": "1d52e564-03e0-540f-b459-0f4a9ae667f6", 00:19:31.544 "is_configured": true, 00:19:31.544 "data_offset": 256, 00:19:31.544 "data_size": 7936 00:19:31.544 } 00:19:31.544 ] 00:19:31.544 }' 00:19:31.544 04:10:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:31.544 04:10:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:19:31.544 04:10:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:31.803 04:10:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:19:31.803 04:10:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:19:31.803 04:10:24 bdev_raid.raid_rebuild_test_sb_4k -- 
common/autotest_common.sh@652 -- # local es=0 00:19:31.803 04:10:24 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:19:31.803 04:10:24 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:19:31.803 04:10:24 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:31.803 04:10:24 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:19:31.803 04:10:24 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:31.803 04:10:24 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:19:31.803 04:10:24 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:31.803 04:10:24 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:31.803 [2024-12-06 04:10:24.912350] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:19:31.803 [2024-12-06 04:10:24.912525] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:19:31.803 [2024-12-06 04:10:24.912544] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:19:31.803 request: 00:19:31.803 { 00:19:31.803 "base_bdev": "BaseBdev1", 00:19:31.803 "raid_bdev": "raid_bdev1", 00:19:31.803 "method": "bdev_raid_add_base_bdev", 00:19:31.803 "req_id": 1 00:19:31.803 } 00:19:31.803 Got JSON-RPC error response 00:19:31.803 response: 00:19:31.803 { 00:19:31.803 "code": -22, 00:19:31.803 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:19:31.803 } 00:19:31.803 04:10:24 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 
00:19:31.803 04:10:24 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@655 -- # es=1 00:19:31.803 04:10:24 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:19:31.803 04:10:24 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:19:31.803 04:10:24 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:19:31.803 04:10:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@779 -- # sleep 1 00:19:32.747 04:10:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:19:32.747 04:10:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:32.747 04:10:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:32.747 04:10:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:32.747 04:10:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:32.747 04:10:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:19:32.747 04:10:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:32.747 04:10:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:32.747 04:10:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:32.747 04:10:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:32.747 04:10:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:32.747 04:10:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:32.747 04:10:25 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:19:32.747 04:10:25 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:32.747 04:10:25 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:32.747 04:10:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:32.747 "name": "raid_bdev1", 00:19:32.747 "uuid": "a94e3b71-9c17-4bcb-93fd-288d9e1a7c91", 00:19:32.747 "strip_size_kb": 0, 00:19:32.747 "state": "online", 00:19:32.747 "raid_level": "raid1", 00:19:32.747 "superblock": true, 00:19:32.747 "num_base_bdevs": 2, 00:19:32.747 "num_base_bdevs_discovered": 1, 00:19:32.747 "num_base_bdevs_operational": 1, 00:19:32.747 "base_bdevs_list": [ 00:19:32.747 { 00:19:32.747 "name": null, 00:19:32.747 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:32.747 "is_configured": false, 00:19:32.747 "data_offset": 0, 00:19:32.747 "data_size": 7936 00:19:32.747 }, 00:19:32.747 { 00:19:32.747 "name": "BaseBdev2", 00:19:32.747 "uuid": "1d52e564-03e0-540f-b459-0f4a9ae667f6", 00:19:32.747 "is_configured": true, 00:19:32.747 "data_offset": 256, 00:19:32.747 "data_size": 7936 00:19:32.747 } 00:19:32.747 ] 00:19:32.747 }' 00:19:32.747 04:10:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:32.747 04:10:25 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:33.329 04:10:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:19:33.329 04:10:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:33.329 04:10:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:19:33.329 04:10:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none 00:19:33.329 04:10:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:33.329 04:10:26 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:33.329 04:10:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:33.329 04:10:26 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:33.329 04:10:26 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:33.329 04:10:26 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:33.329 04:10:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:33.329 "name": "raid_bdev1", 00:19:33.329 "uuid": "a94e3b71-9c17-4bcb-93fd-288d9e1a7c91", 00:19:33.329 "strip_size_kb": 0, 00:19:33.329 "state": "online", 00:19:33.329 "raid_level": "raid1", 00:19:33.329 "superblock": true, 00:19:33.329 "num_base_bdevs": 2, 00:19:33.329 "num_base_bdevs_discovered": 1, 00:19:33.329 "num_base_bdevs_operational": 1, 00:19:33.329 "base_bdevs_list": [ 00:19:33.329 { 00:19:33.329 "name": null, 00:19:33.329 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:33.329 "is_configured": false, 00:19:33.329 "data_offset": 0, 00:19:33.329 "data_size": 7936 00:19:33.329 }, 00:19:33.329 { 00:19:33.329 "name": "BaseBdev2", 00:19:33.329 "uuid": "1d52e564-03e0-540f-b459-0f4a9ae667f6", 00:19:33.329 "is_configured": true, 00:19:33.329 "data_offset": 256, 00:19:33.329 "data_size": 7936 00:19:33.329 } 00:19:33.329 ] 00:19:33.329 }' 00:19:33.329 04:10:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:33.329 04:10:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:19:33.329 04:10:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:33.329 04:10:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:19:33.329 04:10:26 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@784 -- # killprocess 86777 00:19:33.329 04:10:26 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@954 -- # '[' -z 86777 ']' 00:19:33.329 04:10:26 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@958 -- # kill -0 86777 00:19:33.329 04:10:26 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@959 -- # uname 00:19:33.329 04:10:26 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:33.329 04:10:26 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 86777 00:19:33.329 killing process with pid 86777 00:19:33.329 Received shutdown signal, test time was about 60.000000 seconds 00:19:33.329 00:19:33.329 Latency(us) 00:19:33.329 [2024-12-06T04:10:26.683Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:33.329 [2024-12-06T04:10:26.683Z] =================================================================================================================== 00:19:33.329 [2024-12-06T04:10:26.683Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:19:33.329 04:10:26 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:33.329 04:10:26 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:33.329 04:10:26 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@972 -- # echo 'killing process with pid 86777' 00:19:33.329 04:10:26 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@973 -- # kill 86777 00:19:33.329 [2024-12-06 04:10:26.574700] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:19:33.329 [2024-12-06 04:10:26.574819] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:33.329 04:10:26 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@978 -- # wait 86777 00:19:33.329 [2024-12-06 
04:10:26.574868] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:19:33.329 [2024-12-06 04:10:26.574879] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:19:33.587 [2024-12-06 04:10:26.864097] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:19:34.963 04:10:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@786 -- # return 0 00:19:34.963 00:19:34.963 real 0m19.841s 00:19:34.963 user 0m25.950s 00:19:34.963 sys 0m2.607s 00:19:34.963 04:10:27 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:34.963 ************************************ 00:19:34.963 END TEST raid_rebuild_test_sb_4k 00:19:34.963 ************************************ 00:19:34.963 04:10:27 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:34.963 04:10:27 bdev_raid -- bdev/bdev_raid.sh@1003 -- # base_malloc_params='-m 32' 00:19:34.963 04:10:27 bdev_raid -- bdev/bdev_raid.sh@1004 -- # run_test raid_state_function_test_sb_md_separate raid_state_function_test raid1 2 true 00:19:34.963 04:10:27 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:19:34.963 04:10:27 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:34.963 04:10:27 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:19:34.963 ************************************ 00:19:34.963 START TEST raid_state_function_test_sb_md_separate 00:19:34.963 ************************************ 00:19:34.963 04:10:28 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@1129 -- # raid_state_function_test raid1 2 true 00:19:34.963 04:10:28 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:19:34.963 04:10:28 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:19:34.963 
04:10:28 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:19:34.963 04:10:28 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:19:34.963 04:10:28 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:19:34.963 04:10:28 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:19:34.963 04:10:28 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:19:34.963 04:10:28 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:19:34.963 04:10:28 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:19:34.963 04:10:28 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:19:34.963 04:10:28 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:19:34.963 04:10:28 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:19:34.963 04:10:28 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:19:34.963 04:10:28 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:19:34.963 04:10:28 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:19:34.963 04:10:28 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@211 -- # local strip_size 00:19:34.963 04:10:28 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:19:34.963 04:10:28 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:19:34.963 04:10:28 
bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:19:34.963 04:10:28 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:19:34.963 04:10:28 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:19:34.963 04:10:28 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:19:34.963 04:10:28 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@229 -- # raid_pid=87469 00:19:34.963 04:10:28 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:19:34.963 04:10:28 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 87469' 00:19:34.963 Process raid pid: 87469 00:19:34.963 04:10:28 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@231 -- # waitforlisten 87469 00:19:34.963 04:10:28 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@835 -- # '[' -z 87469 ']' 00:19:34.963 04:10:28 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:34.963 04:10:28 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:34.963 04:10:28 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:34.963 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:19:34.963 04:10:28 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:34.963 04:10:28 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:34.963 [2024-12-06 04:10:28.102618] Starting SPDK v25.01-pre git sha1 a4d2a837b / DPDK 24.03.0 initialization... 00:19:34.963 [2024-12-06 04:10:28.102803] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:34.963 [2024-12-06 04:10:28.275090] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:35.222 [2024-12-06 04:10:28.387904] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:35.488 [2024-12-06 04:10:28.586158] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:35.488 [2024-12-06 04:10:28.586276] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:35.747 04:10:28 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:35.747 04:10:28 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@868 -- # return 0 00:19:35.747 04:10:28 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:19:35.747 04:10:28 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:35.747 04:10:28 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:35.747 [2024-12-06 04:10:28.935776] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:19:35.747 [2024-12-06 04:10:28.935832] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev 
BaseBdev1 doesn't exist now 00:19:35.747 [2024-12-06 04:10:28.935842] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:19:35.747 [2024-12-06 04:10:28.935852] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:19:35.747 04:10:28 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:35.747 04:10:28 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:19:35.747 04:10:28 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:19:35.747 04:10:28 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:19:35.747 04:10:28 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:35.747 04:10:28 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:35.747 04:10:28 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:19:35.747 04:10:28 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:35.747 04:10:28 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:35.747 04:10:28 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:35.747 04:10:28 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:35.747 04:10:28 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:35.747 04:10:28 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == 
"Existed_Raid")' 00:19:35.747 04:10:28 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:35.747 04:10:28 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:35.747 04:10:28 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:35.747 04:10:28 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:35.747 "name": "Existed_Raid", 00:19:35.747 "uuid": "c96cefa8-64e6-4c7d-9ad1-89c9760ef95a", 00:19:35.747 "strip_size_kb": 0, 00:19:35.747 "state": "configuring", 00:19:35.747 "raid_level": "raid1", 00:19:35.747 "superblock": true, 00:19:35.747 "num_base_bdevs": 2, 00:19:35.747 "num_base_bdevs_discovered": 0, 00:19:35.747 "num_base_bdevs_operational": 2, 00:19:35.747 "base_bdevs_list": [ 00:19:35.747 { 00:19:35.747 "name": "BaseBdev1", 00:19:35.747 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:35.747 "is_configured": false, 00:19:35.747 "data_offset": 0, 00:19:35.747 "data_size": 0 00:19:35.747 }, 00:19:35.747 { 00:19:35.747 "name": "BaseBdev2", 00:19:35.747 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:35.747 "is_configured": false, 00:19:35.747 "data_offset": 0, 00:19:35.747 "data_size": 0 00:19:35.747 } 00:19:35.747 ] 00:19:35.747 }' 00:19:35.747 04:10:28 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:35.747 04:10:28 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:36.317 04:10:29 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:19:36.317 04:10:29 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:36.317 04:10:29 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:36.317 
[2024-12-06 04:10:29.410930] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:19:36.317 [2024-12-06 04:10:29.410973] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:19:36.317 04:10:29 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:36.317 04:10:29 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:19:36.317 04:10:29 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:36.317 04:10:29 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:36.317 [2024-12-06 04:10:29.422908] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:19:36.317 [2024-12-06 04:10:29.422962] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:19:36.317 [2024-12-06 04:10:29.422971] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:19:36.317 [2024-12-06 04:10:29.422982] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:19:36.317 04:10:29 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:36.317 04:10:29 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b BaseBdev1 00:19:36.317 04:10:29 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:36.317 04:10:29 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:36.317 [2024-12-06 04:10:29.470612] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:19:36.317 
BaseBdev1 00:19:36.317 04:10:29 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:36.317 04:10:29 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:19:36.317 04:10:29 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:19:36.317 04:10:29 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:19:36.317 04:10:29 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@905 -- # local i 00:19:36.317 04:10:29 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:19:36.317 04:10:29 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:19:36.317 04:10:29 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:19:36.317 04:10:29 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:36.317 04:10:29 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:36.317 04:10:29 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:36.317 04:10:29 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:19:36.317 04:10:29 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:36.317 04:10:29 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:36.317 [ 00:19:36.317 { 00:19:36.317 "name": "BaseBdev1", 00:19:36.317 "aliases": [ 00:19:36.317 "726d77f2-2281-4747-9c33-a3bab3eea9c1" 00:19:36.317 ], 00:19:36.317 "product_name": "Malloc disk", 
00:19:36.317 "block_size": 4096, 00:19:36.317 "num_blocks": 8192, 00:19:36.317 "uuid": "726d77f2-2281-4747-9c33-a3bab3eea9c1", 00:19:36.317 "md_size": 32, 00:19:36.317 "md_interleave": false, 00:19:36.317 "dif_type": 0, 00:19:36.317 "assigned_rate_limits": { 00:19:36.317 "rw_ios_per_sec": 0, 00:19:36.317 "rw_mbytes_per_sec": 0, 00:19:36.317 "r_mbytes_per_sec": 0, 00:19:36.317 "w_mbytes_per_sec": 0 00:19:36.317 }, 00:19:36.317 "claimed": true, 00:19:36.317 "claim_type": "exclusive_write", 00:19:36.317 "zoned": false, 00:19:36.317 "supported_io_types": { 00:19:36.317 "read": true, 00:19:36.317 "write": true, 00:19:36.317 "unmap": true, 00:19:36.317 "flush": true, 00:19:36.317 "reset": true, 00:19:36.317 "nvme_admin": false, 00:19:36.317 "nvme_io": false, 00:19:36.317 "nvme_io_md": false, 00:19:36.317 "write_zeroes": true, 00:19:36.317 "zcopy": true, 00:19:36.317 "get_zone_info": false, 00:19:36.317 "zone_management": false, 00:19:36.317 "zone_append": false, 00:19:36.317 "compare": false, 00:19:36.317 "compare_and_write": false, 00:19:36.317 "abort": true, 00:19:36.317 "seek_hole": false, 00:19:36.317 "seek_data": false, 00:19:36.317 "copy": true, 00:19:36.317 "nvme_iov_md": false 00:19:36.317 }, 00:19:36.317 "memory_domains": [ 00:19:36.317 { 00:19:36.317 "dma_device_id": "system", 00:19:36.317 "dma_device_type": 1 00:19:36.317 }, 00:19:36.317 { 00:19:36.317 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:36.317 "dma_device_type": 2 00:19:36.317 } 00:19:36.317 ], 00:19:36.317 "driver_specific": {} 00:19:36.317 } 00:19:36.317 ] 00:19:36.317 04:10:29 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:36.317 04:10:29 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@911 -- # return 0 00:19:36.317 04:10:29 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:19:36.317 04:10:29 
bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:19:36.317 04:10:29 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:19:36.317 04:10:29 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:36.317 04:10:29 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:36.317 04:10:29 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:19:36.317 04:10:29 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:36.317 04:10:29 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:36.317 04:10:29 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:36.317 04:10:29 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:36.317 04:10:29 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:36.317 04:10:29 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:36.317 04:10:29 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:36.317 04:10:29 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:36.317 04:10:29 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:36.317 04:10:29 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:36.317 "name": "Existed_Raid", 00:19:36.317 "uuid": "432f18cb-546d-48d6-9b17-be162c1a794d", 
00:19:36.317 "strip_size_kb": 0, 00:19:36.317 "state": "configuring", 00:19:36.317 "raid_level": "raid1", 00:19:36.317 "superblock": true, 00:19:36.317 "num_base_bdevs": 2, 00:19:36.317 "num_base_bdevs_discovered": 1, 00:19:36.317 "num_base_bdevs_operational": 2, 00:19:36.317 "base_bdevs_list": [ 00:19:36.317 { 00:19:36.317 "name": "BaseBdev1", 00:19:36.317 "uuid": "726d77f2-2281-4747-9c33-a3bab3eea9c1", 00:19:36.317 "is_configured": true, 00:19:36.317 "data_offset": 256, 00:19:36.317 "data_size": 7936 00:19:36.317 }, 00:19:36.317 { 00:19:36.317 "name": "BaseBdev2", 00:19:36.317 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:36.317 "is_configured": false, 00:19:36.317 "data_offset": 0, 00:19:36.317 "data_size": 0 00:19:36.317 } 00:19:36.317 ] 00:19:36.317 }' 00:19:36.317 04:10:29 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:36.317 04:10:29 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:36.886 04:10:29 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:19:36.886 04:10:29 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:36.886 04:10:29 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:36.886 [2024-12-06 04:10:29.969857] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:19:36.886 [2024-12-06 04:10:29.969961] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:19:36.886 04:10:29 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:36.886 04:10:29 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:19:36.886 04:10:29 
bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:36.886 04:10:29 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:36.886 [2024-12-06 04:10:29.977871] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:19:36.886 [2024-12-06 04:10:29.979647] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:19:36.886 [2024-12-06 04:10:29.979729] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:19:36.886 04:10:29 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:36.886 04:10:29 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:19:36.886 04:10:29 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:19:36.886 04:10:29 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:19:36.886 04:10:29 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:19:36.886 04:10:29 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:19:36.887 04:10:29 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:36.887 04:10:29 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:36.887 04:10:29 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:19:36.887 04:10:29 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:36.887 04:10:29 bdev_raid.raid_state_function_test_sb_md_separate -- 
bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:36.887 04:10:29 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:36.887 04:10:29 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:36.887 04:10:29 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:36.887 04:10:29 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:36.887 04:10:29 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:36.887 04:10:29 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:36.887 04:10:30 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:36.887 04:10:30 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:36.887 "name": "Existed_Raid", 00:19:36.887 "uuid": "50568b70-4744-4ff3-a144-3b01204f427c", 00:19:36.887 "strip_size_kb": 0, 00:19:36.887 "state": "configuring", 00:19:36.887 "raid_level": "raid1", 00:19:36.887 "superblock": true, 00:19:36.887 "num_base_bdevs": 2, 00:19:36.887 "num_base_bdevs_discovered": 1, 00:19:36.887 "num_base_bdevs_operational": 2, 00:19:36.887 "base_bdevs_list": [ 00:19:36.887 { 00:19:36.887 "name": "BaseBdev1", 00:19:36.887 "uuid": "726d77f2-2281-4747-9c33-a3bab3eea9c1", 00:19:36.887 "is_configured": true, 00:19:36.887 "data_offset": 256, 00:19:36.887 "data_size": 7936 00:19:36.887 }, 00:19:36.887 { 00:19:36.887 "name": "BaseBdev2", 00:19:36.887 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:36.887 "is_configured": false, 00:19:36.887 "data_offset": 0, 00:19:36.887 "data_size": 0 00:19:36.887 } 00:19:36.887 ] 00:19:36.887 }' 00:19:36.887 04:10:30 
bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:36.887 04:10:30 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:37.146 04:10:30 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b BaseBdev2 00:19:37.146 04:10:30 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:37.146 04:10:30 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:37.146 [2024-12-06 04:10:30.485639] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:19:37.146 [2024-12-06 04:10:30.485955] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:19:37.146 [2024-12-06 04:10:30.486014] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:19:37.146 [2024-12-06 04:10:30.486132] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:19:37.146 [2024-12-06 04:10:30.486305] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:19:37.146 [2024-12-06 04:10:30.486349] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:19:37.146 [2024-12-06 04:10:30.486478] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:37.146 BaseBdev2 00:19:37.146 04:10:30 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:37.146 04:10:30 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:19:37.146 04:10:30 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:19:37.146 04:10:30 bdev_raid.raid_state_function_test_sb_md_separate -- 
common/autotest_common.sh@904 -- # local bdev_timeout= 00:19:37.146 04:10:30 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@905 -- # local i 00:19:37.146 04:10:30 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:19:37.146 04:10:30 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:19:37.146 04:10:30 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:19:37.146 04:10:30 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:37.146 04:10:30 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:37.406 04:10:30 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:37.406 04:10:30 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:19:37.406 04:10:30 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:37.406 04:10:30 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:37.406 [ 00:19:37.406 { 00:19:37.406 "name": "BaseBdev2", 00:19:37.406 "aliases": [ 00:19:37.406 "3ac44c42-d171-4993-8f6a-7acc09a7a3f3" 00:19:37.406 ], 00:19:37.406 "product_name": "Malloc disk", 00:19:37.406 "block_size": 4096, 00:19:37.406 "num_blocks": 8192, 00:19:37.406 "uuid": "3ac44c42-d171-4993-8f6a-7acc09a7a3f3", 00:19:37.406 "md_size": 32, 00:19:37.406 "md_interleave": false, 00:19:37.406 "dif_type": 0, 00:19:37.406 "assigned_rate_limits": { 00:19:37.406 "rw_ios_per_sec": 0, 00:19:37.406 "rw_mbytes_per_sec": 0, 00:19:37.406 "r_mbytes_per_sec": 0, 00:19:37.406 "w_mbytes_per_sec": 0 00:19:37.406 }, 00:19:37.406 "claimed": true, 00:19:37.406 "claim_type": 
"exclusive_write", 00:19:37.406 "zoned": false, 00:19:37.406 "supported_io_types": { 00:19:37.406 "read": true, 00:19:37.406 "write": true, 00:19:37.406 "unmap": true, 00:19:37.406 "flush": true, 00:19:37.406 "reset": true, 00:19:37.406 "nvme_admin": false, 00:19:37.406 "nvme_io": false, 00:19:37.406 "nvme_io_md": false, 00:19:37.406 "write_zeroes": true, 00:19:37.406 "zcopy": true, 00:19:37.406 "get_zone_info": false, 00:19:37.406 "zone_management": false, 00:19:37.406 "zone_append": false, 00:19:37.406 "compare": false, 00:19:37.406 "compare_and_write": false, 00:19:37.406 "abort": true, 00:19:37.406 "seek_hole": false, 00:19:37.406 "seek_data": false, 00:19:37.406 "copy": true, 00:19:37.406 "nvme_iov_md": false 00:19:37.406 }, 00:19:37.406 "memory_domains": [ 00:19:37.406 { 00:19:37.406 "dma_device_id": "system", 00:19:37.406 "dma_device_type": 1 00:19:37.406 }, 00:19:37.406 { 00:19:37.406 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:37.406 "dma_device_type": 2 00:19:37.406 } 00:19:37.406 ], 00:19:37.406 "driver_specific": {} 00:19:37.406 } 00:19:37.406 ] 00:19:37.406 04:10:30 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:37.406 04:10:30 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@911 -- # return 0 00:19:37.406 04:10:30 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:19:37.406 04:10:30 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:19:37.406 04:10:30 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:19:37.406 04:10:30 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:19:37.406 04:10:30 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:37.406 
04:10:30 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:37.406 04:10:30 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:37.406 04:10:30 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:19:37.406 04:10:30 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:37.406 04:10:30 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:37.406 04:10:30 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:37.406 04:10:30 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:37.406 04:10:30 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:37.406 04:10:30 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:37.406 04:10:30 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:37.406 04:10:30 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:37.406 04:10:30 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:37.406 04:10:30 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:37.406 "name": "Existed_Raid", 00:19:37.406 "uuid": "50568b70-4744-4ff3-a144-3b01204f427c", 00:19:37.406 "strip_size_kb": 0, 00:19:37.406 "state": "online", 00:19:37.406 "raid_level": "raid1", 00:19:37.406 "superblock": true, 00:19:37.406 "num_base_bdevs": 2, 00:19:37.406 "num_base_bdevs_discovered": 2, 00:19:37.406 "num_base_bdevs_operational": 2, 00:19:37.406 
"base_bdevs_list": [ 00:19:37.406 { 00:19:37.406 "name": "BaseBdev1", 00:19:37.406 "uuid": "726d77f2-2281-4747-9c33-a3bab3eea9c1", 00:19:37.406 "is_configured": true, 00:19:37.406 "data_offset": 256, 00:19:37.406 "data_size": 7936 00:19:37.406 }, 00:19:37.406 { 00:19:37.406 "name": "BaseBdev2", 00:19:37.406 "uuid": "3ac44c42-d171-4993-8f6a-7acc09a7a3f3", 00:19:37.406 "is_configured": true, 00:19:37.406 "data_offset": 256, 00:19:37.406 "data_size": 7936 00:19:37.406 } 00:19:37.406 ] 00:19:37.406 }' 00:19:37.406 04:10:30 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:37.406 04:10:30 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:37.666 04:10:30 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:19:37.666 04:10:30 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:19:37.666 04:10:30 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:19:37.666 04:10:30 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:19:37.666 04:10:30 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@184 -- # local name 00:19:37.666 04:10:30 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:19:37.666 04:10:30 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:19:37.666 04:10:30 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:19:37.666 04:10:30 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:37.667 04:10:30 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # 
set +x 00:19:37.667 [2024-12-06 04:10:30.985197] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:37.667 04:10:31 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:37.927 04:10:31 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:19:37.927 "name": "Existed_Raid", 00:19:37.927 "aliases": [ 00:19:37.927 "50568b70-4744-4ff3-a144-3b01204f427c" 00:19:37.927 ], 00:19:37.927 "product_name": "Raid Volume", 00:19:37.927 "block_size": 4096, 00:19:37.927 "num_blocks": 7936, 00:19:37.927 "uuid": "50568b70-4744-4ff3-a144-3b01204f427c", 00:19:37.927 "md_size": 32, 00:19:37.927 "md_interleave": false, 00:19:37.927 "dif_type": 0, 00:19:37.927 "assigned_rate_limits": { 00:19:37.927 "rw_ios_per_sec": 0, 00:19:37.927 "rw_mbytes_per_sec": 0, 00:19:37.927 "r_mbytes_per_sec": 0, 00:19:37.927 "w_mbytes_per_sec": 0 00:19:37.927 }, 00:19:37.927 "claimed": false, 00:19:37.927 "zoned": false, 00:19:37.927 "supported_io_types": { 00:19:37.927 "read": true, 00:19:37.927 "write": true, 00:19:37.927 "unmap": false, 00:19:37.927 "flush": false, 00:19:37.927 "reset": true, 00:19:37.927 "nvme_admin": false, 00:19:37.927 "nvme_io": false, 00:19:37.927 "nvme_io_md": false, 00:19:37.927 "write_zeroes": true, 00:19:37.927 "zcopy": false, 00:19:37.927 "get_zone_info": false, 00:19:37.927 "zone_management": false, 00:19:37.927 "zone_append": false, 00:19:37.927 "compare": false, 00:19:37.927 "compare_and_write": false, 00:19:37.927 "abort": false, 00:19:37.927 "seek_hole": false, 00:19:37.927 "seek_data": false, 00:19:37.927 "copy": false, 00:19:37.927 "nvme_iov_md": false 00:19:37.927 }, 00:19:37.927 "memory_domains": [ 00:19:37.927 { 00:19:37.927 "dma_device_id": "system", 00:19:37.927 "dma_device_type": 1 00:19:37.927 }, 00:19:37.927 { 00:19:37.927 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:37.927 "dma_device_type": 2 00:19:37.927 }, 00:19:37.927 { 
00:19:37.927 "dma_device_id": "system", 00:19:37.927 "dma_device_type": 1 00:19:37.927 }, 00:19:37.927 { 00:19:37.927 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:37.927 "dma_device_type": 2 00:19:37.927 } 00:19:37.927 ], 00:19:37.927 "driver_specific": { 00:19:37.927 "raid": { 00:19:37.927 "uuid": "50568b70-4744-4ff3-a144-3b01204f427c", 00:19:37.927 "strip_size_kb": 0, 00:19:37.927 "state": "online", 00:19:37.927 "raid_level": "raid1", 00:19:37.927 "superblock": true, 00:19:37.927 "num_base_bdevs": 2, 00:19:37.927 "num_base_bdevs_discovered": 2, 00:19:37.927 "num_base_bdevs_operational": 2, 00:19:37.927 "base_bdevs_list": [ 00:19:37.927 { 00:19:37.927 "name": "BaseBdev1", 00:19:37.927 "uuid": "726d77f2-2281-4747-9c33-a3bab3eea9c1", 00:19:37.927 "is_configured": true, 00:19:37.927 "data_offset": 256, 00:19:37.927 "data_size": 7936 00:19:37.927 }, 00:19:37.927 { 00:19:37.927 "name": "BaseBdev2", 00:19:37.927 "uuid": "3ac44c42-d171-4993-8f6a-7acc09a7a3f3", 00:19:37.927 "is_configured": true, 00:19:37.927 "data_offset": 256, 00:19:37.927 "data_size": 7936 00:19:37.927 } 00:19:37.927 ] 00:19:37.927 } 00:19:37.927 } 00:19:37.927 }' 00:19:37.927 04:10:31 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:19:37.927 04:10:31 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:19:37.927 BaseBdev2' 00:19:37.927 04:10:31 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:37.927 04:10:31 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 32 false 0' 00:19:37.927 04:10:31 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:19:37.927 04:10:31 bdev_raid.raid_state_function_test_sb_md_separate 
-- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:37.927 04:10:31 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:19:37.927 04:10:31 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:37.927 04:10:31 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:37.927 04:10:31 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:37.927 04:10:31 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0' 00:19:37.927 04:10:31 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ \0 ]] 00:19:37.927 04:10:31 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:19:37.927 04:10:31 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:19:37.927 04:10:31 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:37.927 04:10:31 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:37.927 04:10:31 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:37.927 04:10:31 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:37.927 04:10:31 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0' 00:19:37.927 04:10:31 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ 
\0 ]] 00:19:37.927 04:10:31 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:19:37.927 04:10:31 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:37.927 04:10:31 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:37.927 [2024-12-06 04:10:31.208519] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:19:38.187 04:10:31 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:38.187 04:10:31 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@260 -- # local expected_state 00:19:38.187 04:10:31 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:19:38.187 04:10:31 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@198 -- # case $1 in 00:19:38.187 04:10:31 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@199 -- # return 0 00:19:38.187 04:10:31 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:19:38.187 04:10:31 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:19:38.187 04:10:31 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:19:38.187 04:10:31 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:38.187 04:10:31 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:38.187 04:10:31 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:38.187 04:10:31 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=1 00:19:38.187 04:10:31 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:38.187 04:10:31 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:38.187 04:10:31 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:38.187 04:10:31 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:38.188 04:10:31 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:38.188 04:10:31 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:38.188 04:10:31 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:38.188 04:10:31 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:38.188 04:10:31 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:38.188 04:10:31 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:38.188 "name": "Existed_Raid", 00:19:38.188 "uuid": "50568b70-4744-4ff3-a144-3b01204f427c", 00:19:38.188 "strip_size_kb": 0, 00:19:38.188 "state": "online", 00:19:38.188 "raid_level": "raid1", 00:19:38.188 "superblock": true, 00:19:38.188 "num_base_bdevs": 2, 00:19:38.188 "num_base_bdevs_discovered": 1, 00:19:38.188 "num_base_bdevs_operational": 1, 00:19:38.188 "base_bdevs_list": [ 00:19:38.188 { 00:19:38.188 "name": null, 00:19:38.188 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:38.188 "is_configured": false, 00:19:38.188 "data_offset": 0, 00:19:38.188 "data_size": 7936 00:19:38.188 }, 00:19:38.188 { 00:19:38.188 "name": "BaseBdev2", 00:19:38.188 "uuid": 
"3ac44c42-d171-4993-8f6a-7acc09a7a3f3", 00:19:38.188 "is_configured": true, 00:19:38.188 "data_offset": 256, 00:19:38.188 "data_size": 7936 00:19:38.188 } 00:19:38.188 ] 00:19:38.188 }' 00:19:38.188 04:10:31 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:38.188 04:10:31 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:38.448 04:10:31 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:19:38.448 04:10:31 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:19:38.448 04:10:31 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:38.448 04:10:31 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:38.448 04:10:31 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:19:38.448 04:10:31 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:38.448 04:10:31 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:38.448 04:10:31 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:19:38.448 04:10:31 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:19:38.448 04:10:31 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:19:38.448 04:10:31 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:38.448 04:10:31 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:38.448 [2024-12-06 04:10:31.774636] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:19:38.448 [2024-12-06 04:10:31.774741] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:19:38.709 [2024-12-06 04:10:31.876556] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:38.709 [2024-12-06 04:10:31.876605] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:19:38.709 [2024-12-06 04:10:31.876624] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:19:38.709 04:10:31 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:38.709 04:10:31 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:19:38.709 04:10:31 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:19:38.709 04:10:31 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:19:38.709 04:10:31 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:38.709 04:10:31 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:38.709 04:10:31 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:38.709 04:10:31 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:38.709 04:10:31 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:19:38.709 04:10:31 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:19:38.709 04:10:31 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:19:38.709 04:10:31 
bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@326 -- # killprocess 87469 00:19:38.709 04:10:31 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@954 -- # '[' -z 87469 ']' 00:19:38.709 04:10:31 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@958 -- # kill -0 87469 00:19:38.709 04:10:31 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@959 -- # uname 00:19:38.709 04:10:31 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:38.709 04:10:31 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 87469 00:19:38.709 killing process with pid 87469 00:19:38.709 04:10:31 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:38.709 04:10:31 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:38.709 04:10:31 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@972 -- # echo 'killing process with pid 87469' 00:19:38.709 04:10:31 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@973 -- # kill 87469 00:19:38.709 [2024-12-06 04:10:31.971180] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:19:38.709 04:10:31 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@978 -- # wait 87469 00:19:38.709 [2024-12-06 04:10:31.988003] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:19:40.091 04:10:33 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@328 -- # return 0 00:19:40.091 00:19:40.091 real 0m5.059s 00:19:40.091 user 0m7.326s 00:19:40.091 sys 0m0.815s 00:19:40.091 04:10:33 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:40.091 
04:10:33 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:40.091 ************************************ 00:19:40.091 END TEST raid_state_function_test_sb_md_separate 00:19:40.091 ************************************ 00:19:40.091 04:10:33 bdev_raid -- bdev/bdev_raid.sh@1005 -- # run_test raid_superblock_test_md_separate raid_superblock_test raid1 2 00:19:40.091 04:10:33 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:19:40.091 04:10:33 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:40.091 04:10:33 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:19:40.091 ************************************ 00:19:40.091 START TEST raid_superblock_test_md_separate 00:19:40.091 ************************************ 00:19:40.091 04:10:33 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@1129 -- # raid_superblock_test raid1 2 00:19:40.091 04:10:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1 00:19:40.091 04:10:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2 00:19:40.091 04:10:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:19:40.091 04:10:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:19:40.091 04:10:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:19:40.091 04:10:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:19:40.091 04:10:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:19:40.091 04:10:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:19:40.091 04:10:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 
00:19:40.091 04:10:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@399 -- # local strip_size 00:19:40.091 04:10:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:19:40.091 04:10:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:19:40.091 04:10:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:19:40.091 04:10:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@404 -- # '[' raid1 '!=' raid1 ']' 00:19:40.091 04:10:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:19:40.091 04:10:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:19:40.091 04:10:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@412 -- # raid_pid=87721 00:19:40.091 04:10:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@413 -- # waitforlisten 87721 00:19:40.091 04:10:33 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@835 -- # '[' -z 87721 ']' 00:19:40.091 04:10:33 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:40.091 04:10:33 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:40.091 04:10:33 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:40.091 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:19:40.091 04:10:33 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:40.091 04:10:33 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:40.091 [2024-12-06 04:10:33.230212] Starting SPDK v25.01-pre git sha1 a4d2a837b / DPDK 24.03.0 initialization... 00:19:40.091 [2024-12-06 04:10:33.230371] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid87721 ] 00:19:40.091 [2024-12-06 04:10:33.411344] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:40.350 [2024-12-06 04:10:33.522171] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:40.609 [2024-12-06 04:10:33.719460] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:40.609 [2024-12-06 04:10:33.719595] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:40.868 04:10:34 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:40.868 04:10:34 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@868 -- # return 0 00:19:40.868 04:10:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:19:40.868 04:10:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:19:40.868 04:10:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:19:40.868 04:10:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:19:40.868 04:10:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:19:40.868 04:10:34 
bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:19:40.868 04:10:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:19:40.868 04:10:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:19:40.868 04:10:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b malloc1 00:19:40.868 04:10:34 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:40.868 04:10:34 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:40.868 malloc1 00:19:40.868 04:10:34 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:40.868 04:10:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:19:40.868 04:10:34 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:40.868 04:10:34 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:40.868 [2024-12-06 04:10:34.104586] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:19:40.868 [2024-12-06 04:10:34.104714] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:40.868 [2024-12-06 04:10:34.104770] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:19:40.868 [2024-12-06 04:10:34.104799] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:40.868 [2024-12-06 04:10:34.106671] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:40.868 [2024-12-06 04:10:34.106757] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: 
pt1 00:19:40.869 pt1 00:19:40.869 04:10:34 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:40.869 04:10:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:19:40.869 04:10:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:19:40.869 04:10:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:19:40.869 04:10:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:19:40.869 04:10:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:19:40.869 04:10:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:19:40.869 04:10:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:19:40.869 04:10:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:19:40.869 04:10:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b malloc2 00:19:40.869 04:10:34 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:40.869 04:10:34 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:40.869 malloc2 00:19:40.869 04:10:34 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:40.869 04:10:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:19:40.869 04:10:34 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:40.869 04:10:34 
bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:40.869 [2024-12-06 04:10:34.166192] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:19:40.869 [2024-12-06 04:10:34.166289] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:40.869 [2024-12-06 04:10:34.166341] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:19:40.869 [2024-12-06 04:10:34.166381] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:40.869 [2024-12-06 04:10:34.168216] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:40.869 [2024-12-06 04:10:34.168282] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:19:40.869 pt2 00:19:40.869 04:10:34 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:40.869 04:10:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:19:40.869 04:10:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:19:40.869 04:10:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2'\''' -n raid_bdev1 -s 00:19:40.869 04:10:34 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:40.869 04:10:34 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:40.869 [2024-12-06 04:10:34.178200] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:19:40.869 [2024-12-06 04:10:34.179996] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:19:40.869 [2024-12-06 04:10:34.180209] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:19:40.869 [2024-12-06 04:10:34.180225] 
bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:19:40.869 [2024-12-06 04:10:34.180296] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:19:40.869 [2024-12-06 04:10:34.180416] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:19:40.869 [2024-12-06 04:10:34.180427] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:19:40.869 [2024-12-06 04:10:34.180528] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:40.869 04:10:34 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:40.869 04:10:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:19:40.869 04:10:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:40.869 04:10:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:40.869 04:10:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:40.869 04:10:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:40.869 04:10:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:19:40.869 04:10:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:40.869 04:10:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:40.869 04:10:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:40.869 04:10:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:40.869 04:10:34 
bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:40.869 04:10:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:40.869 04:10:34 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:40.869 04:10:34 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:40.869 04:10:34 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:40.869 04:10:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:40.869 "name": "raid_bdev1", 00:19:40.869 "uuid": "e09d5391-85be-470a-b362-bcca1c4fcbce", 00:19:40.869 "strip_size_kb": 0, 00:19:40.869 "state": "online", 00:19:40.869 "raid_level": "raid1", 00:19:40.869 "superblock": true, 00:19:40.869 "num_base_bdevs": 2, 00:19:40.869 "num_base_bdevs_discovered": 2, 00:19:40.869 "num_base_bdevs_operational": 2, 00:19:40.869 "base_bdevs_list": [ 00:19:40.869 { 00:19:40.869 "name": "pt1", 00:19:40.869 "uuid": "00000000-0000-0000-0000-000000000001", 00:19:40.869 "is_configured": true, 00:19:40.869 "data_offset": 256, 00:19:40.869 "data_size": 7936 00:19:40.869 }, 00:19:40.869 { 00:19:40.869 "name": "pt2", 00:19:40.869 "uuid": "00000000-0000-0000-0000-000000000002", 00:19:40.869 "is_configured": true, 00:19:40.869 "data_offset": 256, 00:19:40.869 "data_size": 7936 00:19:40.869 } 00:19:40.869 ] 00:19:40.869 }' 00:19:40.869 04:10:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:40.869 04:10:34 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:41.459 04:10:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:19:41.459 04:10:34 bdev_raid.raid_superblock_test_md_separate -- 
bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:19:41.459 04:10:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:19:41.459 04:10:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:19:41.459 04:10:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@184 -- # local name 00:19:41.459 04:10:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:19:41.459 04:10:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:19:41.459 04:10:34 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:41.459 04:10:34 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:41.459 04:10:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:19:41.459 [2024-12-06 04:10:34.573797] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:41.459 04:10:34 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:41.459 04:10:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:19:41.459 "name": "raid_bdev1", 00:19:41.459 "aliases": [ 00:19:41.459 "e09d5391-85be-470a-b362-bcca1c4fcbce" 00:19:41.459 ], 00:19:41.459 "product_name": "Raid Volume", 00:19:41.459 "block_size": 4096, 00:19:41.459 "num_blocks": 7936, 00:19:41.459 "uuid": "e09d5391-85be-470a-b362-bcca1c4fcbce", 00:19:41.459 "md_size": 32, 00:19:41.459 "md_interleave": false, 00:19:41.459 "dif_type": 0, 00:19:41.459 "assigned_rate_limits": { 00:19:41.459 "rw_ios_per_sec": 0, 00:19:41.459 "rw_mbytes_per_sec": 0, 00:19:41.459 "r_mbytes_per_sec": 0, 00:19:41.459 "w_mbytes_per_sec": 0 00:19:41.459 }, 00:19:41.459 "claimed": false, 00:19:41.459 "zoned": false, 
00:19:41.459 "supported_io_types": { 00:19:41.459 "read": true, 00:19:41.459 "write": true, 00:19:41.459 "unmap": false, 00:19:41.459 "flush": false, 00:19:41.459 "reset": true, 00:19:41.459 "nvme_admin": false, 00:19:41.459 "nvme_io": false, 00:19:41.459 "nvme_io_md": false, 00:19:41.459 "write_zeroes": true, 00:19:41.459 "zcopy": false, 00:19:41.459 "get_zone_info": false, 00:19:41.459 "zone_management": false, 00:19:41.459 "zone_append": false, 00:19:41.459 "compare": false, 00:19:41.459 "compare_and_write": false, 00:19:41.459 "abort": false, 00:19:41.459 "seek_hole": false, 00:19:41.459 "seek_data": false, 00:19:41.459 "copy": false, 00:19:41.459 "nvme_iov_md": false 00:19:41.459 }, 00:19:41.459 "memory_domains": [ 00:19:41.459 { 00:19:41.459 "dma_device_id": "system", 00:19:41.459 "dma_device_type": 1 00:19:41.459 }, 00:19:41.459 { 00:19:41.459 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:41.459 "dma_device_type": 2 00:19:41.459 }, 00:19:41.459 { 00:19:41.459 "dma_device_id": "system", 00:19:41.459 "dma_device_type": 1 00:19:41.459 }, 00:19:41.459 { 00:19:41.459 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:41.459 "dma_device_type": 2 00:19:41.459 } 00:19:41.459 ], 00:19:41.459 "driver_specific": { 00:19:41.459 "raid": { 00:19:41.459 "uuid": "e09d5391-85be-470a-b362-bcca1c4fcbce", 00:19:41.459 "strip_size_kb": 0, 00:19:41.459 "state": "online", 00:19:41.459 "raid_level": "raid1", 00:19:41.459 "superblock": true, 00:19:41.459 "num_base_bdevs": 2, 00:19:41.459 "num_base_bdevs_discovered": 2, 00:19:41.459 "num_base_bdevs_operational": 2, 00:19:41.459 "base_bdevs_list": [ 00:19:41.459 { 00:19:41.459 "name": "pt1", 00:19:41.459 "uuid": "00000000-0000-0000-0000-000000000001", 00:19:41.459 "is_configured": true, 00:19:41.459 "data_offset": 256, 00:19:41.459 "data_size": 7936 00:19:41.459 }, 00:19:41.459 { 00:19:41.459 "name": "pt2", 00:19:41.459 "uuid": "00000000-0000-0000-0000-000000000002", 00:19:41.459 "is_configured": true, 00:19:41.459 "data_offset": 256, 
00:19:41.459 "data_size": 7936 00:19:41.459 } 00:19:41.459 ] 00:19:41.459 } 00:19:41.459 } 00:19:41.459 }' 00:19:41.459 04:10:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:19:41.459 04:10:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:19:41.459 pt2' 00:19:41.459 04:10:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:41.459 04:10:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 32 false 0' 00:19:41.459 04:10:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:19:41.459 04:10:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:19:41.459 04:10:34 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:41.459 04:10:34 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:41.459 04:10:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:41.459 04:10:34 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:41.459 04:10:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0' 00:19:41.459 04:10:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ \0 ]] 00:19:41.459 04:10:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:19:41.459 04:10:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | 
[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:41.459 04:10:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:19:41.459 04:10:34 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:41.459 04:10:34 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:41.459 04:10:34 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:41.459 04:10:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0' 00:19:41.459 04:10:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ \0 ]] 00:19:41.459 04:10:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:19:41.459 04:10:34 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:41.459 04:10:34 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:41.459 04:10:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:19:41.459 [2024-12-06 04:10:34.765415] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:41.459 04:10:34 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:41.459 04:10:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=e09d5391-85be-470a-b362-bcca1c4fcbce 00:19:41.459 04:10:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@436 -- # '[' -z e09d5391-85be-470a-b362-bcca1c4fcbce ']' 00:19:41.459 04:10:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:19:41.459 04:10:34 
bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:41.459 04:10:34 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:41.719 [2024-12-06 04:10:34.813070] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:19:41.719 [2024-12-06 04:10:34.813091] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:19:41.719 [2024-12-06 04:10:34.813165] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:41.719 [2024-12-06 04:10:34.813219] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:19:41.719 [2024-12-06 04:10:34.813230] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:19:41.719 04:10:34 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:41.719 04:10:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:19:41.719 04:10:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:41.719 04:10:34 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:41.719 04:10:34 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:41.719 04:10:34 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:41.719 04:10:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:19:41.719 04:10:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:19:41.719 04:10:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:19:41.719 04:10:34 bdev_raid.raid_superblock_test_md_separate -- 
bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:19:41.719 04:10:34 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:41.719 04:10:34 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:41.719 04:10:34 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:41.719 04:10:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:19:41.719 04:10:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:19:41.719 04:10:34 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:41.719 04:10:34 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:41.719 04:10:34 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:41.719 04:10:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:19:41.719 04:10:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:19:41.719 04:10:34 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:41.719 04:10:34 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:41.719 04:10:34 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:41.719 04:10:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:19:41.719 04:10:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:19:41.719 04:10:34 bdev_raid.raid_superblock_test_md_separate -- 
common/autotest_common.sh@652 -- # local es=0 00:19:41.719 04:10:34 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:19:41.719 04:10:34 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:19:41.719 04:10:34 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:41.719 04:10:34 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:19:41.719 04:10:34 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:41.719 04:10:34 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:19:41.719 04:10:34 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:41.719 04:10:34 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:41.719 [2024-12-06 04:10:34.940874] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:19:41.719 [2024-12-06 04:10:34.942761] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:19:41.720 [2024-12-06 04:10:34.942890] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:19:41.720 [2024-12-06 04:10:34.942996] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:19:41.720 [2024-12-06 04:10:34.943058] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:19:41.720 [2024-12-06 04:10:34.943098] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state 
configuring 00:19:41.720 request: 00:19:41.720 { 00:19:41.720 "name": "raid_bdev1", 00:19:41.720 "raid_level": "raid1", 00:19:41.720 "base_bdevs": [ 00:19:41.720 "malloc1", 00:19:41.720 "malloc2" 00:19:41.720 ], 00:19:41.720 "superblock": false, 00:19:41.720 "method": "bdev_raid_create", 00:19:41.720 "req_id": 1 00:19:41.720 } 00:19:41.720 Got JSON-RPC error response 00:19:41.720 response: 00:19:41.720 { 00:19:41.720 "code": -17, 00:19:41.720 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:19:41.720 } 00:19:41.720 04:10:34 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:19:41.720 04:10:34 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@655 -- # es=1 00:19:41.720 04:10:34 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:19:41.720 04:10:34 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:19:41.720 04:10:34 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:19:41.720 04:10:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:41.720 04:10:34 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:41.720 04:10:34 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:41.720 04:10:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:19:41.720 04:10:34 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:41.720 04:10:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:19:41.720 04:10:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:19:41.720 04:10:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@465 -- # rpc_cmd 
bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:19:41.720 04:10:34 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:41.720 04:10:35 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:41.720 [2024-12-06 04:10:35.004747] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:19:41.720 [2024-12-06 04:10:35.004804] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:41.720 [2024-12-06 04:10:35.004820] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:19:41.720 [2024-12-06 04:10:35.004832] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:41.720 [2024-12-06 04:10:35.006792] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:41.720 [2024-12-06 04:10:35.006834] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:19:41.720 [2024-12-06 04:10:35.006885] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:19:41.720 [2024-12-06 04:10:35.006939] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:19:41.720 pt1 00:19:41.720 04:10:35 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:41.720 04:10:35 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:19:41.720 04:10:35 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:41.720 04:10:35 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:19:41.720 04:10:35 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:41.720 04:10:35 bdev_raid.raid_superblock_test_md_separate -- 
bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:41.720 04:10:35 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:19:41.720 04:10:35 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:41.720 04:10:35 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:41.720 04:10:35 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:41.720 04:10:35 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:41.720 04:10:35 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:41.720 04:10:35 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:41.720 04:10:35 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:41.720 04:10:35 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:41.720 04:10:35 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:41.720 04:10:35 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:41.720 "name": "raid_bdev1", 00:19:41.720 "uuid": "e09d5391-85be-470a-b362-bcca1c4fcbce", 00:19:41.720 "strip_size_kb": 0, 00:19:41.720 "state": "configuring", 00:19:41.720 "raid_level": "raid1", 00:19:41.720 "superblock": true, 00:19:41.720 "num_base_bdevs": 2, 00:19:41.720 "num_base_bdevs_discovered": 1, 00:19:41.720 "num_base_bdevs_operational": 2, 00:19:41.720 "base_bdevs_list": [ 00:19:41.720 { 00:19:41.720 "name": "pt1", 00:19:41.720 "uuid": "00000000-0000-0000-0000-000000000001", 00:19:41.720 "is_configured": true, 00:19:41.720 "data_offset": 256, 00:19:41.720 "data_size": 7936 00:19:41.720 }, 00:19:41.720 { 
00:19:41.720 "name": null, 00:19:41.720 "uuid": "00000000-0000-0000-0000-000000000002", 00:19:41.720 "is_configured": false, 00:19:41.720 "data_offset": 256, 00:19:41.720 "data_size": 7936 00:19:41.720 } 00:19:41.720 ] 00:19:41.720 }' 00:19:41.720 04:10:35 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:41.720 04:10:35 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:42.289 04:10:35 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']' 00:19:42.289 04:10:35 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:19:42.289 04:10:35 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:19:42.289 04:10:35 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:19:42.289 04:10:35 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:42.289 04:10:35 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:42.289 [2024-12-06 04:10:35.416157] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:19:42.289 [2024-12-06 04:10:35.416327] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:42.289 [2024-12-06 04:10:35.416369] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:19:42.289 [2024-12-06 04:10:35.416399] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:42.289 [2024-12-06 04:10:35.416654] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:42.289 [2024-12-06 04:10:35.416722] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:19:42.289 [2024-12-06 04:10:35.416803] 
bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:19:42.289 [2024-12-06 04:10:35.416857] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:19:42.289 [2024-12-06 04:10:35.417008] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:19:42.289 [2024-12-06 04:10:35.417057] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:19:42.289 [2024-12-06 04:10:35.417167] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:19:42.289 [2024-12-06 04:10:35.417319] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:19:42.289 [2024-12-06 04:10:35.417355] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:19:42.289 [2024-12-06 04:10:35.417481] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:42.289 pt2 00:19:42.289 04:10:35 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:42.289 04:10:35 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:19:42.289 04:10:35 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:19:42.289 04:10:35 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:19:42.289 04:10:35 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:42.289 04:10:35 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:42.289 04:10:35 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:42.289 04:10:35 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:42.289 04:10:35 
bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:19:42.289 04:10:35 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:42.289 04:10:35 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:42.289 04:10:35 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:42.289 04:10:35 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:42.289 04:10:35 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:42.289 04:10:35 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:42.289 04:10:35 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:42.289 04:10:35 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:42.289 04:10:35 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:42.289 04:10:35 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:42.289 "name": "raid_bdev1", 00:19:42.289 "uuid": "e09d5391-85be-470a-b362-bcca1c4fcbce", 00:19:42.289 "strip_size_kb": 0, 00:19:42.289 "state": "online", 00:19:42.289 "raid_level": "raid1", 00:19:42.289 "superblock": true, 00:19:42.289 "num_base_bdevs": 2, 00:19:42.289 "num_base_bdevs_discovered": 2, 00:19:42.289 "num_base_bdevs_operational": 2, 00:19:42.289 "base_bdevs_list": [ 00:19:42.289 { 00:19:42.289 "name": "pt1", 00:19:42.289 "uuid": "00000000-0000-0000-0000-000000000001", 00:19:42.289 "is_configured": true, 00:19:42.289 "data_offset": 256, 00:19:42.289 "data_size": 7936 00:19:42.289 }, 00:19:42.289 { 00:19:42.289 "name": "pt2", 00:19:42.289 "uuid": 
"00000000-0000-0000-0000-000000000002", 00:19:42.289 "is_configured": true, 00:19:42.289 "data_offset": 256, 00:19:42.289 "data_size": 7936 00:19:42.289 } 00:19:42.289 ] 00:19:42.289 }' 00:19:42.289 04:10:35 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:42.289 04:10:35 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:42.547 04:10:35 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:19:42.547 04:10:35 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:19:42.547 04:10:35 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:19:42.547 04:10:35 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:19:42.547 04:10:35 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@184 -- # local name 00:19:42.547 04:10:35 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:19:42.547 04:10:35 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:19:42.547 04:10:35 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:19:42.547 04:10:35 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:42.547 04:10:35 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:42.547 [2024-12-06 04:10:35.807739] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:42.547 04:10:35 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:42.547 04:10:35 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:19:42.547 "name": "raid_bdev1", 00:19:42.547 
"aliases": [ 00:19:42.547 "e09d5391-85be-470a-b362-bcca1c4fcbce" 00:19:42.547 ], 00:19:42.547 "product_name": "Raid Volume", 00:19:42.547 "block_size": 4096, 00:19:42.547 "num_blocks": 7936, 00:19:42.547 "uuid": "e09d5391-85be-470a-b362-bcca1c4fcbce", 00:19:42.547 "md_size": 32, 00:19:42.547 "md_interleave": false, 00:19:42.547 "dif_type": 0, 00:19:42.547 "assigned_rate_limits": { 00:19:42.547 "rw_ios_per_sec": 0, 00:19:42.547 "rw_mbytes_per_sec": 0, 00:19:42.547 "r_mbytes_per_sec": 0, 00:19:42.547 "w_mbytes_per_sec": 0 00:19:42.547 }, 00:19:42.547 "claimed": false, 00:19:42.547 "zoned": false, 00:19:42.547 "supported_io_types": { 00:19:42.547 "read": true, 00:19:42.547 "write": true, 00:19:42.547 "unmap": false, 00:19:42.547 "flush": false, 00:19:42.547 "reset": true, 00:19:42.547 "nvme_admin": false, 00:19:42.547 "nvme_io": false, 00:19:42.547 "nvme_io_md": false, 00:19:42.547 "write_zeroes": true, 00:19:42.547 "zcopy": false, 00:19:42.547 "get_zone_info": false, 00:19:42.547 "zone_management": false, 00:19:42.547 "zone_append": false, 00:19:42.547 "compare": false, 00:19:42.547 "compare_and_write": false, 00:19:42.547 "abort": false, 00:19:42.547 "seek_hole": false, 00:19:42.547 "seek_data": false, 00:19:42.547 "copy": false, 00:19:42.547 "nvme_iov_md": false 00:19:42.547 }, 00:19:42.547 "memory_domains": [ 00:19:42.547 { 00:19:42.547 "dma_device_id": "system", 00:19:42.547 "dma_device_type": 1 00:19:42.547 }, 00:19:42.547 { 00:19:42.547 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:42.547 "dma_device_type": 2 00:19:42.547 }, 00:19:42.547 { 00:19:42.547 "dma_device_id": "system", 00:19:42.547 "dma_device_type": 1 00:19:42.547 }, 00:19:42.547 { 00:19:42.547 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:42.547 "dma_device_type": 2 00:19:42.547 } 00:19:42.547 ], 00:19:42.547 "driver_specific": { 00:19:42.547 "raid": { 00:19:42.547 "uuid": "e09d5391-85be-470a-b362-bcca1c4fcbce", 00:19:42.547 "strip_size_kb": 0, 00:19:42.547 "state": "online", 00:19:42.547 
"raid_level": "raid1", 00:19:42.547 "superblock": true, 00:19:42.547 "num_base_bdevs": 2, 00:19:42.547 "num_base_bdevs_discovered": 2, 00:19:42.547 "num_base_bdevs_operational": 2, 00:19:42.547 "base_bdevs_list": [ 00:19:42.547 { 00:19:42.547 "name": "pt1", 00:19:42.547 "uuid": "00000000-0000-0000-0000-000000000001", 00:19:42.547 "is_configured": true, 00:19:42.547 "data_offset": 256, 00:19:42.547 "data_size": 7936 00:19:42.547 }, 00:19:42.547 { 00:19:42.547 "name": "pt2", 00:19:42.547 "uuid": "00000000-0000-0000-0000-000000000002", 00:19:42.547 "is_configured": true, 00:19:42.547 "data_offset": 256, 00:19:42.547 "data_size": 7936 00:19:42.547 } 00:19:42.547 ] 00:19:42.547 } 00:19:42.547 } 00:19:42.547 }' 00:19:42.547 04:10:35 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:19:42.547 04:10:35 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:19:42.547 pt2' 00:19:42.547 04:10:35 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:42.547 04:10:35 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 32 false 0' 00:19:42.547 04:10:35 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:19:42.806 04:10:35 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:19:42.806 04:10:35 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:42.806 04:10:35 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:42.806 04:10:35 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:42.806 04:10:35 
bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:42.806 04:10:35 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0' 00:19:42.806 04:10:35 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ \0 ]] 00:19:42.806 04:10:35 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:19:42.806 04:10:35 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:19:42.806 04:10:35 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:42.806 04:10:35 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:42.806 04:10:35 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:42.806 04:10:35 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:42.806 04:10:35 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0' 00:19:42.806 04:10:35 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ \0 ]] 00:19:42.806 04:10:35 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:19:42.806 04:10:35 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:42.806 04:10:35 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:42.806 04:10:35 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:19:42.806 [2024-12-06 04:10:35.987460] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: 
raid_bdev_dump_config_json 00:19:42.806 04:10:35 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:42.806 04:10:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@487 -- # '[' e09d5391-85be-470a-b362-bcca1c4fcbce '!=' e09d5391-85be-470a-b362-bcca1c4fcbce ']' 00:19:42.806 04:10:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1 00:19:42.806 04:10:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@198 -- # case $1 in 00:19:42.806 04:10:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@199 -- # return 0 00:19:42.806 04:10:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:19:42.806 04:10:36 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:42.806 04:10:36 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:42.806 [2024-12-06 04:10:36.015159] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:19:42.806 04:10:36 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:42.806 04:10:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:19:42.806 04:10:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:42.806 04:10:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:42.806 04:10:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:42.806 04:10:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:42.807 04:10:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:19:42.807 
04:10:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:42.807 04:10:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:42.807 04:10:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:42.807 04:10:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:42.807 04:10:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:42.807 04:10:36 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:42.807 04:10:36 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:42.807 04:10:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:42.807 04:10:36 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:42.807 04:10:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:42.807 "name": "raid_bdev1", 00:19:42.807 "uuid": "e09d5391-85be-470a-b362-bcca1c4fcbce", 00:19:42.807 "strip_size_kb": 0, 00:19:42.807 "state": "online", 00:19:42.807 "raid_level": "raid1", 00:19:42.807 "superblock": true, 00:19:42.807 "num_base_bdevs": 2, 00:19:42.807 "num_base_bdevs_discovered": 1, 00:19:42.807 "num_base_bdevs_operational": 1, 00:19:42.807 "base_bdevs_list": [ 00:19:42.807 { 00:19:42.807 "name": null, 00:19:42.807 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:42.807 "is_configured": false, 00:19:42.807 "data_offset": 0, 00:19:42.807 "data_size": 7936 00:19:42.807 }, 00:19:42.807 { 00:19:42.807 "name": "pt2", 00:19:42.807 "uuid": "00000000-0000-0000-0000-000000000002", 00:19:42.807 "is_configured": true, 00:19:42.807 "data_offset": 256, 00:19:42.807 "data_size": 7936 00:19:42.807 } 
00:19:42.807 ] 00:19:42.807 }' 00:19:42.807 04:10:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:42.807 04:10:36 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:43.375 04:10:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:19:43.375 04:10:36 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:43.375 04:10:36 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:43.375 [2024-12-06 04:10:36.426397] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:19:43.375 [2024-12-06 04:10:36.426474] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:19:43.375 [2024-12-06 04:10:36.426566] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:43.375 [2024-12-06 04:10:36.426628] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:19:43.375 [2024-12-06 04:10:36.426703] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:19:43.375 04:10:36 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:43.375 04:10:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:19:43.375 04:10:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:43.375 04:10:36 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:43.375 04:10:36 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:43.376 04:10:36 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:43.376 04:10:36 
bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:19:43.376 04:10:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:19:43.376 04:10:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:19:43.376 04:10:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:19:43.376 04:10:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:19:43.376 04:10:36 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:43.376 04:10:36 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:43.376 04:10:36 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:43.376 04:10:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:19:43.376 04:10:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:19:43.376 04:10:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:19:43.376 04:10:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:19:43.376 04:10:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@519 -- # i=1 00:19:43.376 04:10:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:19:43.376 04:10:36 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:43.376 04:10:36 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:43.376 [2024-12-06 04:10:36.478297] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:19:43.376 [2024-12-06 
04:10:36.478352] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:43.376 [2024-12-06 04:10:36.478367] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:19:43.376 [2024-12-06 04:10:36.478377] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:43.376 [2024-12-06 04:10:36.480351] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:43.376 [2024-12-06 04:10:36.480392] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:19:43.376 [2024-12-06 04:10:36.480443] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:19:43.376 [2024-12-06 04:10:36.480498] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:19:43.376 [2024-12-06 04:10:36.480597] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:19:43.376 [2024-12-06 04:10:36.480610] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:19:43.376 [2024-12-06 04:10:36.480695] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:19:43.376 [2024-12-06 04:10:36.480823] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:19:43.376 [2024-12-06 04:10:36.480830] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:19:43.376 [2024-12-06 04:10:36.480948] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:43.376 pt2 00:19:43.376 04:10:36 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:43.376 04:10:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:19:43.376 04:10:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local 
raid_bdev_name=raid_bdev1 00:19:43.376 04:10:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:43.376 04:10:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:43.376 04:10:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:43.376 04:10:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:19:43.376 04:10:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:43.376 04:10:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:43.376 04:10:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:43.376 04:10:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:43.376 04:10:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:43.376 04:10:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:43.376 04:10:36 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:43.376 04:10:36 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:43.376 04:10:36 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:43.376 04:10:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:43.376 "name": "raid_bdev1", 00:19:43.376 "uuid": "e09d5391-85be-470a-b362-bcca1c4fcbce", 00:19:43.376 "strip_size_kb": 0, 00:19:43.376 "state": "online", 00:19:43.376 "raid_level": "raid1", 00:19:43.376 "superblock": true, 00:19:43.376 "num_base_bdevs": 2, 00:19:43.376 
"num_base_bdevs_discovered": 1, 00:19:43.376 "num_base_bdevs_operational": 1, 00:19:43.376 "base_bdevs_list": [ 00:19:43.376 { 00:19:43.376 "name": null, 00:19:43.376 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:43.376 "is_configured": false, 00:19:43.376 "data_offset": 256, 00:19:43.376 "data_size": 7936 00:19:43.376 }, 00:19:43.376 { 00:19:43.376 "name": "pt2", 00:19:43.376 "uuid": "00000000-0000-0000-0000-000000000002", 00:19:43.376 "is_configured": true, 00:19:43.376 "data_offset": 256, 00:19:43.376 "data_size": 7936 00:19:43.376 } 00:19:43.376 ] 00:19:43.376 }' 00:19:43.376 04:10:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:43.376 04:10:36 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:43.637 04:10:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:19:43.637 04:10:36 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:43.637 04:10:36 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:43.637 [2024-12-06 04:10:36.913592] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:19:43.637 [2024-12-06 04:10:36.913705] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:19:43.637 [2024-12-06 04:10:36.913806] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:43.637 [2024-12-06 04:10:36.913873] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:19:43.637 [2024-12-06 04:10:36.913953] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:19:43.637 04:10:36 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:43.637 04:10:36 
bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:19:43.637 04:10:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:43.637 04:10:36 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:43.637 04:10:36 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:43.637 04:10:36 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:43.637 04:10:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:19:43.637 04:10:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:19:43.637 04:10:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@532 -- # '[' 2 -gt 2 ']' 00:19:43.637 04:10:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:19:43.637 04:10:36 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:43.637 04:10:36 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:43.637 [2024-12-06 04:10:36.961562] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:19:43.637 [2024-12-06 04:10:36.961699] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:43.637 [2024-12-06 04:10:36.961746] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009980 00:19:43.637 [2024-12-06 04:10:36.961777] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:43.637 [2024-12-06 04:10:36.963854] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:43.637 [2024-12-06 04:10:36.963929] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created 
pt_bdev for: pt1 00:19:43.637 [2024-12-06 04:10:36.964038] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:19:43.637 [2024-12-06 04:10:36.964130] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:19:43.637 [2024-12-06 04:10:36.964299] bdev_raid.c:3685:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:19:43.637 [2024-12-06 04:10:36.964351] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:19:43.637 [2024-12-06 04:10:36.964393] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state configuring 00:19:43.637 [2024-12-06 04:10:36.964495] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:19:43.637 [2024-12-06 04:10:36.964606] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008900 00:19:43.637 [2024-12-06 04:10:36.964626] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:19:43.637 [2024-12-06 04:10:36.964696] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:19:43.637 [2024-12-06 04:10:36.964807] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:19:43.637 [2024-12-06 04:10:36.964818] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:19:43.637 [2024-12-06 04:10:36.964920] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:43.637 pt1 00:19:43.637 04:10:36 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:43.637 04:10:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@542 -- # '[' 2 -gt 2 ']' 00:19:43.637 04:10:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 
00:19:43.637 04:10:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:43.637 04:10:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:43.637 04:10:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:43.637 04:10:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:43.637 04:10:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:19:43.637 04:10:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:43.637 04:10:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:43.637 04:10:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:43.637 04:10:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:43.637 04:10:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:43.637 04:10:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:43.637 04:10:36 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:43.637 04:10:36 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:43.897 04:10:36 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:43.897 04:10:37 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:43.897 "name": "raid_bdev1", 00:19:43.897 "uuid": "e09d5391-85be-470a-b362-bcca1c4fcbce", 00:19:43.897 "strip_size_kb": 0, 00:19:43.897 "state": "online", 00:19:43.897 "raid_level": "raid1", 
00:19:43.897 "superblock": true, 00:19:43.897 "num_base_bdevs": 2, 00:19:43.897 "num_base_bdevs_discovered": 1, 00:19:43.897 "num_base_bdevs_operational": 1, 00:19:43.897 "base_bdevs_list": [ 00:19:43.897 { 00:19:43.897 "name": null, 00:19:43.897 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:43.897 "is_configured": false, 00:19:43.897 "data_offset": 256, 00:19:43.897 "data_size": 7936 00:19:43.897 }, 00:19:43.897 { 00:19:43.897 "name": "pt2", 00:19:43.897 "uuid": "00000000-0000-0000-0000-000000000002", 00:19:43.897 "is_configured": true, 00:19:43.897 "data_offset": 256, 00:19:43.898 "data_size": 7936 00:19:43.898 } 00:19:43.898 ] 00:19:43.898 }' 00:19:43.898 04:10:37 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:43.898 04:10:37 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:44.158 04:10:37 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:19:44.158 04:10:37 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:19:44.158 04:10:37 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:44.158 04:10:37 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:44.158 04:10:37 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:44.158 04:10:37 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:19:44.158 04:10:37 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:19:44.158 04:10:37 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:44.158 04:10:37 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:44.158 04:10:37 
bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:19:44.158 [2024-12-06 04:10:37.388996] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:44.158 04:10:37 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:44.158 04:10:37 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@558 -- # '[' e09d5391-85be-470a-b362-bcca1c4fcbce '!=' e09d5391-85be-470a-b362-bcca1c4fcbce ']' 00:19:44.158 04:10:37 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@563 -- # killprocess 87721 00:19:44.158 04:10:37 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@954 -- # '[' -z 87721 ']' 00:19:44.158 04:10:37 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@958 -- # kill -0 87721 00:19:44.158 04:10:37 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@959 -- # uname 00:19:44.158 04:10:37 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:44.158 04:10:37 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 87721 00:19:44.158 04:10:37 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:44.158 04:10:37 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:44.158 04:10:37 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@972 -- # echo 'killing process with pid 87721' 00:19:44.158 killing process with pid 87721 00:19:44.158 04:10:37 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@973 -- # kill 87721 00:19:44.158 [2024-12-06 04:10:37.470389] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:19:44.158 [2024-12-06 04:10:37.470475] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: 
raid_bdev_destruct 00:19:44.158 [2024-12-06 04:10:37.470523] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:19:44.158 [2024-12-06 04:10:37.470541] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 00:19:44.158 04:10:37 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@978 -- # wait 87721 00:19:44.417 [2024-12-06 04:10:37.686956] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:19:45.799 04:10:38 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@565 -- # return 0 00:19:45.799 00:19:45.799 real 0m5.634s 00:19:45.799 user 0m8.356s 00:19:45.799 sys 0m1.027s 00:19:45.799 04:10:38 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:45.799 04:10:38 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:45.799 ************************************ 00:19:45.799 END TEST raid_superblock_test_md_separate 00:19:45.799 ************************************ 00:19:45.799 04:10:38 bdev_raid -- bdev/bdev_raid.sh@1006 -- # '[' true = true ']' 00:19:45.799 04:10:38 bdev_raid -- bdev/bdev_raid.sh@1007 -- # run_test raid_rebuild_test_sb_md_separate raid_rebuild_test raid1 2 true false true 00:19:45.799 04:10:38 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:19:45.799 04:10:38 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:45.799 04:10:38 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:19:45.799 ************************************ 00:19:45.799 START TEST raid_rebuild_test_sb_md_separate 00:19:45.799 ************************************ 00:19:45.799 04:10:38 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 2 true false true 00:19:45.799 04:10:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@569 -- 
# local raid_level=raid1 00:19:45.799 04:10:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2 00:19:45.799 04:10:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:19:45.799 04:10:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:19:45.799 04:10:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@573 -- # local verify=true 00:19:45.799 04:10:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:19:45.799 04:10:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:19:45.799 04:10:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:19:45.799 04:10:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:19:45.799 04:10:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:19:45.799 04:10:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:19:45.799 04:10:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:19:45.799 04:10:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:19:45.799 04:10:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:19:45.799 04:10:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:19:45.799 04:10:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:19:45.799 04:10:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@576 -- # local strip_size 00:19:45.799 04:10:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@577 -- # local create_arg 
00:19:45.799 04:10:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:19:45.799 04:10:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@579 -- # local data_offset 00:19:45.799 04:10:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:19:45.799 04:10:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:19:45.799 04:10:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:19:45.799 04:10:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:19:45.799 04:10:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@597 -- # raid_pid=88041 00:19:45.799 04:10:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:19:45.799 04:10:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@598 -- # waitforlisten 88041 00:19:45.799 04:10:38 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@835 -- # '[' -z 88041 ']' 00:19:45.799 04:10:38 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:45.799 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:45.799 04:10:38 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:45.799 04:10:38 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:19:45.799 04:10:38 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:45.799 04:10:38 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:45.799 [2024-12-06 04:10:38.938580] Starting SPDK v25.01-pre git sha1 a4d2a837b / DPDK 24.03.0 initialization... 00:19:45.799 I/O size of 3145728 is greater than zero copy threshold (65536). 00:19:45.799 Zero copy mechanism will not be used. 00:19:45.799 [2024-12-06 04:10:38.938783] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88041 ] 00:19:45.799 [2024-12-06 04:10:39.110898] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:46.059 [2024-12-06 04:10:39.222464] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:46.319 [2024-12-06 04:10:39.412897] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:46.319 [2024-12-06 04:10:39.412953] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:46.579 04:10:39 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:46.579 04:10:39 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@868 -- # return 0 00:19:46.579 04:10:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:19:46.579 04:10:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b BaseBdev1_malloc 00:19:46.579 04:10:39 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:46.579 04:10:39 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:46.579 BaseBdev1_malloc 
00:19:46.579 04:10:39 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:46.579 04:10:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:19:46.579 04:10:39 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:46.579 04:10:39 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:46.579 [2024-12-06 04:10:39.800658] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:19:46.579 [2024-12-06 04:10:39.800719] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:46.579 [2024-12-06 04:10:39.800757] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:19:46.579 [2024-12-06 04:10:39.800769] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:46.579 [2024-12-06 04:10:39.802613] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:46.579 [2024-12-06 04:10:39.802699] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:19:46.579 BaseBdev1 00:19:46.579 04:10:39 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:46.579 04:10:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:19:46.579 04:10:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b BaseBdev2_malloc 00:19:46.579 04:10:39 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:46.579 04:10:39 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:46.579 BaseBdev2_malloc 00:19:46.579 04:10:39 bdev_raid.raid_rebuild_test_sb_md_separate -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:46.579 04:10:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:19:46.579 04:10:39 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:46.579 04:10:39 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:46.579 [2024-12-06 04:10:39.857087] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:19:46.579 [2024-12-06 04:10:39.857253] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:46.579 [2024-12-06 04:10:39.857280] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:19:46.579 [2024-12-06 04:10:39.857293] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:46.579 [2024-12-06 04:10:39.859211] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:46.579 [2024-12-06 04:10:39.859251] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:19:46.579 BaseBdev2 00:19:46.579 04:10:39 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:46.579 04:10:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b spare_malloc 00:19:46.579 04:10:39 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:46.579 04:10:39 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:46.579 spare_malloc 00:19:46.579 04:10:39 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:46.579 04:10:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 
100000 -n 100000 00:19:46.579 04:10:39 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:46.579 04:10:39 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:46.580 spare_delay 00:19:46.580 04:10:39 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:46.580 04:10:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:19:46.580 04:10:39 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:46.580 04:10:39 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:46.840 [2024-12-06 04:10:39.934464] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:19:46.840 [2024-12-06 04:10:39.934532] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:46.840 [2024-12-06 04:10:39.934556] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:19:46.840 [2024-12-06 04:10:39.934567] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:46.840 [2024-12-06 04:10:39.936516] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:46.840 [2024-12-06 04:10:39.936600] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:19:46.841 spare 00:19:46.841 04:10:39 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:46.841 04:10:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:19:46.841 04:10:39 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:46.841 04:10:39 bdev_raid.raid_rebuild_test_sb_md_separate -- 
common/autotest_common.sh@10 -- # set +x 00:19:46.841 [2024-12-06 04:10:39.946473] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:19:46.841 [2024-12-06 04:10:39.948265] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:19:46.841 [2024-12-06 04:10:39.948439] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:19:46.841 [2024-12-06 04:10:39.948453] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:19:46.841 [2024-12-06 04:10:39.948534] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:19:46.841 [2024-12-06 04:10:39.948667] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:19:46.841 [2024-12-06 04:10:39.948677] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:19:46.841 [2024-12-06 04:10:39.948781] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:46.841 04:10:39 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:46.841 04:10:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:19:46.841 04:10:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:46.841 04:10:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:46.841 04:10:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:46.841 04:10:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:46.841 04:10:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:19:46.841 04:10:39 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:46.841 04:10:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:46.841 04:10:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:46.841 04:10:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:46.841 04:10:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:46.841 04:10:39 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:46.841 04:10:39 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:46.841 04:10:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:46.841 04:10:39 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:46.841 04:10:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:46.841 "name": "raid_bdev1", 00:19:46.841 "uuid": "92513416-972d-4a25-8b6b-f391d7ae9265", 00:19:46.841 "strip_size_kb": 0, 00:19:46.841 "state": "online", 00:19:46.841 "raid_level": "raid1", 00:19:46.841 "superblock": true, 00:19:46.841 "num_base_bdevs": 2, 00:19:46.841 "num_base_bdevs_discovered": 2, 00:19:46.841 "num_base_bdevs_operational": 2, 00:19:46.841 "base_bdevs_list": [ 00:19:46.841 { 00:19:46.841 "name": "BaseBdev1", 00:19:46.841 "uuid": "01f1c0b4-ae69-5457-8c8b-539a426e73dd", 00:19:46.841 "is_configured": true, 00:19:46.841 "data_offset": 256, 00:19:46.841 "data_size": 7936 00:19:46.841 }, 00:19:46.841 { 00:19:46.841 "name": "BaseBdev2", 00:19:46.841 "uuid": "cc92c573-46b7-522d-93ce-16162d6ae1c5", 00:19:46.841 "is_configured": true, 00:19:46.841 "data_offset": 256, 00:19:46.841 "data_size": 7936 
00:19:46.841 } 00:19:46.841 ] 00:19:46.841 }' 00:19:46.841 04:10:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:46.841 04:10:40 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:47.101 04:10:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:19:47.101 04:10:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:19:47.101 04:10:40 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:47.101 04:10:40 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:47.101 [2024-12-06 04:10:40.350079] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:47.101 04:10:40 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:47.101 04:10:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=7936 00:19:47.101 04:10:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:47.101 04:10:40 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:47.101 04:10:40 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:47.101 04:10:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:19:47.101 04:10:40 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:47.101 04:10:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@619 -- # data_offset=256 00:19:47.101 04:10:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:19:47.101 04:10:40 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:19:47.101 04:10:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:19:47.101 04:10:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:19:47.101 04:10:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:19:47.101 04:10:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:19:47.101 04:10:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@10 -- # local bdev_list 00:19:47.101 04:10:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:19:47.101 04:10:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@11 -- # local nbd_list 00:19:47.101 04:10:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@12 -- # local i 00:19:47.101 04:10:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:19:47.101 04:10:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:19:47.101 04:10:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:19:47.361 [2024-12-06 04:10:40.605398] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:19:47.361 /dev/nbd0 00:19:47.361 04:10:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:19:47.361 04:10:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:19:47.361 04:10:40 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:19:47.361 04:10:40 bdev_raid.raid_rebuild_test_sb_md_separate -- 
common/autotest_common.sh@873 -- # local i 00:19:47.361 04:10:40 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:19:47.361 04:10:40 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:19:47.361 04:10:40 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:19:47.361 04:10:40 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@877 -- # break 00:19:47.361 04:10:40 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:19:47.361 04:10:40 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:19:47.361 04:10:40 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:19:47.361 1+0 records in 00:19:47.361 1+0 records out 00:19:47.361 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000286167 s, 14.3 MB/s 00:19:47.361 04:10:40 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:47.361 04:10:40 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@890 -- # size=4096 00:19:47.361 04:10:40 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:47.361 04:10:40 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:19:47.361 04:10:40 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@893 -- # return 0 00:19:47.361 04:10:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:19:47.361 04:10:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:19:47.361 04:10:40 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@629 -- # '[' raid1 = raid5f ']' 00:19:47.361 04:10:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@633 -- # write_unit_size=1 00:19:47.362 04:10:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=4096 count=7936 oflag=direct 00:19:47.932 7936+0 records in 00:19:47.932 7936+0 records out 00:19:47.932 32505856 bytes (33 MB, 31 MiB) copied, 0.594917 s, 54.6 MB/s 00:19:47.932 04:10:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:19:47.932 04:10:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:19:47.932 04:10:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:19:47.932 04:10:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@50 -- # local nbd_list 00:19:47.932 04:10:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@51 -- # local i 00:19:47.932 04:10:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:19:47.932 04:10:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:19:48.192 04:10:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:19:48.192 [2024-12-06 04:10:41.459118] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:48.192 04:10:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:19:48.192 04:10:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:19:48.192 04:10:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:19:48.192 04:10:41 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:19:48.192 04:10:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:19:48.192 04:10:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@41 -- # break 00:19:48.192 04:10:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@45 -- # return 0 00:19:48.192 04:10:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:19:48.192 04:10:41 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:48.192 04:10:41 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:48.192 [2024-12-06 04:10:41.477872] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:19:48.192 04:10:41 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:48.192 04:10:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:19:48.192 04:10:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:48.192 04:10:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:48.192 04:10:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:48.192 04:10:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:48.192 04:10:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:19:48.192 04:10:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:48.192 04:10:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 
00:19:48.192 04:10:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:48.192 04:10:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:48.192 04:10:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:48.192 04:10:41 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:48.192 04:10:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:48.192 04:10:41 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:48.192 04:10:41 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:48.192 04:10:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:48.192 "name": "raid_bdev1", 00:19:48.192 "uuid": "92513416-972d-4a25-8b6b-f391d7ae9265", 00:19:48.192 "strip_size_kb": 0, 00:19:48.192 "state": "online", 00:19:48.192 "raid_level": "raid1", 00:19:48.192 "superblock": true, 00:19:48.192 "num_base_bdevs": 2, 00:19:48.192 "num_base_bdevs_discovered": 1, 00:19:48.192 "num_base_bdevs_operational": 1, 00:19:48.192 "base_bdevs_list": [ 00:19:48.192 { 00:19:48.192 "name": null, 00:19:48.192 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:48.192 "is_configured": false, 00:19:48.192 "data_offset": 0, 00:19:48.192 "data_size": 7936 00:19:48.192 }, 00:19:48.192 { 00:19:48.192 "name": "BaseBdev2", 00:19:48.192 "uuid": "cc92c573-46b7-522d-93ce-16162d6ae1c5", 00:19:48.192 "is_configured": true, 00:19:48.192 "data_offset": 256, 00:19:48.192 "data_size": 7936 00:19:48.192 } 00:19:48.192 ] 00:19:48.192 }' 00:19:48.192 04:10:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:48.192 04:10:41 bdev_raid.raid_rebuild_test_sb_md_separate -- 
common/autotest_common.sh@10 -- # set +x 00:19:48.761 04:10:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:19:48.762 04:10:41 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:48.762 04:10:41 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:48.762 [2024-12-06 04:10:41.853216] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:19:48.762 [2024-12-06 04:10:41.869430] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00018d260 00:19:48.762 04:10:41 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:48.762 04:10:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@647 -- # sleep 1 00:19:48.762 [2024-12-06 04:10:41.871307] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:19:49.702 04:10:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:49.702 04:10:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:49.702 04:10:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:49.702 04:10:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:19:49.702 04:10:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:49.702 04:10:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:49.702 04:10:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:49.702 04:10:42 bdev_raid.raid_rebuild_test_sb_md_separate -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:19:49.702 04:10:42 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:49.702 04:10:42 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:49.702 04:10:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:49.702 "name": "raid_bdev1", 00:19:49.702 "uuid": "92513416-972d-4a25-8b6b-f391d7ae9265", 00:19:49.702 "strip_size_kb": 0, 00:19:49.702 "state": "online", 00:19:49.702 "raid_level": "raid1", 00:19:49.702 "superblock": true, 00:19:49.702 "num_base_bdevs": 2, 00:19:49.702 "num_base_bdevs_discovered": 2, 00:19:49.702 "num_base_bdevs_operational": 2, 00:19:49.702 "process": { 00:19:49.702 "type": "rebuild", 00:19:49.702 "target": "spare", 00:19:49.702 "progress": { 00:19:49.702 "blocks": 2560, 00:19:49.702 "percent": 32 00:19:49.702 } 00:19:49.702 }, 00:19:49.702 "base_bdevs_list": [ 00:19:49.702 { 00:19:49.702 "name": "spare", 00:19:49.702 "uuid": "7f5b5493-8a67-54f9-a325-31cdc4cd0a03", 00:19:49.702 "is_configured": true, 00:19:49.702 "data_offset": 256, 00:19:49.702 "data_size": 7936 00:19:49.702 }, 00:19:49.702 { 00:19:49.702 "name": "BaseBdev2", 00:19:49.702 "uuid": "cc92c573-46b7-522d-93ce-16162d6ae1c5", 00:19:49.702 "is_configured": true, 00:19:49.702 "data_offset": 256, 00:19:49.702 "data_size": 7936 00:19:49.702 } 00:19:49.702 ] 00:19:49.702 }' 00:19:49.702 04:10:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:49.702 04:10:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:49.702 04:10:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:49.702 04:10:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:19:49.702 04:10:43 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:19:49.702 04:10:43 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:49.702 04:10:43 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:49.702 [2024-12-06 04:10:43.031295] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:19:49.962 [2024-12-06 04:10:43.076452] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:19:49.962 [2024-12-06 04:10:43.076511] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:49.962 [2024-12-06 04:10:43.076527] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:19:49.962 [2024-12-06 04:10:43.076538] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:19:49.962 04:10:43 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:49.962 04:10:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:19:49.962 04:10:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:49.962 04:10:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:49.962 04:10:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:49.962 04:10:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:49.962 04:10:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:19:49.962 04:10:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:49.962 04:10:43 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:49.962 04:10:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:49.962 04:10:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:49.962 04:10:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:49.962 04:10:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:49.962 04:10:43 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:49.963 04:10:43 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:49.963 04:10:43 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:49.963 04:10:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:49.963 "name": "raid_bdev1", 00:19:49.963 "uuid": "92513416-972d-4a25-8b6b-f391d7ae9265", 00:19:49.963 "strip_size_kb": 0, 00:19:49.963 "state": "online", 00:19:49.963 "raid_level": "raid1", 00:19:49.963 "superblock": true, 00:19:49.963 "num_base_bdevs": 2, 00:19:49.963 "num_base_bdevs_discovered": 1, 00:19:49.963 "num_base_bdevs_operational": 1, 00:19:49.963 "base_bdevs_list": [ 00:19:49.963 { 00:19:49.963 "name": null, 00:19:49.963 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:49.963 "is_configured": false, 00:19:49.963 "data_offset": 0, 00:19:49.963 "data_size": 7936 00:19:49.963 }, 00:19:49.963 { 00:19:49.963 "name": "BaseBdev2", 00:19:49.963 "uuid": "cc92c573-46b7-522d-93ce-16162d6ae1c5", 00:19:49.963 "is_configured": true, 00:19:49.963 "data_offset": 256, 00:19:49.963 "data_size": 7936 00:19:49.963 } 00:19:49.963 ] 00:19:49.963 }' 00:19:49.963 04:10:43 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:49.963 04:10:43 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:50.223 04:10:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:19:50.223 04:10:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:50.223 04:10:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:19:50.223 04:10:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none 00:19:50.223 04:10:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:50.223 04:10:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:50.223 04:10:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:50.223 04:10:43 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:50.223 04:10:43 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:50.223 04:10:43 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:50.223 04:10:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:50.223 "name": "raid_bdev1", 00:19:50.223 "uuid": "92513416-972d-4a25-8b6b-f391d7ae9265", 00:19:50.223 "strip_size_kb": 0, 00:19:50.223 "state": "online", 00:19:50.223 "raid_level": "raid1", 00:19:50.223 "superblock": true, 00:19:50.223 "num_base_bdevs": 2, 00:19:50.223 "num_base_bdevs_discovered": 1, 00:19:50.223 "num_base_bdevs_operational": 1, 00:19:50.223 "base_bdevs_list": [ 00:19:50.223 { 00:19:50.223 "name": null, 00:19:50.223 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:50.223 
"is_configured": false, 00:19:50.223 "data_offset": 0, 00:19:50.223 "data_size": 7936 00:19:50.223 }, 00:19:50.223 { 00:19:50.223 "name": "BaseBdev2", 00:19:50.223 "uuid": "cc92c573-46b7-522d-93ce-16162d6ae1c5", 00:19:50.223 "is_configured": true, 00:19:50.223 "data_offset": 256, 00:19:50.223 "data_size": 7936 00:19:50.223 } 00:19:50.223 ] 00:19:50.223 }' 00:19:50.223 04:10:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:50.223 04:10:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:19:50.223 04:10:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:50.483 04:10:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:19:50.483 04:10:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:19:50.483 04:10:43 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:50.483 04:10:43 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:50.483 [2024-12-06 04:10:43.596063] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:19:50.483 [2024-12-06 04:10:43.609760] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00018d330 00:19:50.483 04:10:43 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:50.483 04:10:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@663 -- # sleep 1 00:19:50.483 [2024-12-06 04:10:43.611500] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:19:51.424 04:10:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:51.424 04:10:44 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:51.424 04:10:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:51.424 04:10:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:19:51.424 04:10:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:51.424 04:10:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:51.424 04:10:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:51.424 04:10:44 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:51.424 04:10:44 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:51.424 04:10:44 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:51.424 04:10:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:51.424 "name": "raid_bdev1", 00:19:51.424 "uuid": "92513416-972d-4a25-8b6b-f391d7ae9265", 00:19:51.424 "strip_size_kb": 0, 00:19:51.424 "state": "online", 00:19:51.424 "raid_level": "raid1", 00:19:51.424 "superblock": true, 00:19:51.424 "num_base_bdevs": 2, 00:19:51.424 "num_base_bdevs_discovered": 2, 00:19:51.424 "num_base_bdevs_operational": 2, 00:19:51.424 "process": { 00:19:51.424 "type": "rebuild", 00:19:51.424 "target": "spare", 00:19:51.424 "progress": { 00:19:51.424 "blocks": 2560, 00:19:51.424 "percent": 32 00:19:51.424 } 00:19:51.424 }, 00:19:51.424 "base_bdevs_list": [ 00:19:51.424 { 00:19:51.424 "name": "spare", 00:19:51.424 "uuid": "7f5b5493-8a67-54f9-a325-31cdc4cd0a03", 00:19:51.424 "is_configured": true, 00:19:51.424 "data_offset": 256, 00:19:51.424 "data_size": 7936 00:19:51.424 }, 
00:19:51.424 { 00:19:51.424 "name": "BaseBdev2", 00:19:51.424 "uuid": "cc92c573-46b7-522d-93ce-16162d6ae1c5", 00:19:51.424 "is_configured": true, 00:19:51.424 "data_offset": 256, 00:19:51.424 "data_size": 7936 00:19:51.424 } 00:19:51.424 ] 00:19:51.424 }' 00:19:51.424 04:10:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:51.424 04:10:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:51.424 04:10:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:51.424 04:10:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:19:51.424 04:10:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:19:51.424 04:10:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:19:51.424 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:19:51.424 04:10:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:19:51.424 04:10:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:19:51.424 04:10:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:19:51.424 04:10:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@706 -- # local timeout=722 00:19:51.424 04:10:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:19:51.424 04:10:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:51.424 04:10:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:51.424 04:10:44 
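The `bdev_raid.sh: line 666: [: =: unary operator expected` message logged above is a classic single-bracket quoting bug: the xtrace shows `'[' = false ']'`, meaning a variable expanded to nothing and left `[` without its left operand. A minimal reproduction with a hypothetical `flag` variable (not the actual bdev_raid.sh code), plus the two usual fixes:

```shell
#!/usr/bin/env bash
flag=""   # hypothetical stand-in for the empty test parameter

# Broken: with flag empty this becomes `[ = false ]`, which prints
# "[: =: unary operator expected" and returns status 2:
# [ $flag = false ] && echo "unreachable"

# Fix 1: quote the expansion so the empty string stays a single word
[ "$flag" = false ] || echo "quoted test: flag is not false"

# Fix 2: use [[ ]], which never word-splits its operands
[[ $flag == false ]] || echo "double-bracket test: flag is not false"
```

The failed `[` simply returns nonzero rather than aborting the script, which is presumably why the rebuild verification in the log continues past the error.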
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:51.424 04:10:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:19:51.424 04:10:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:51.424 04:10:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:51.424 04:10:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:51.424 04:10:44 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:51.424 04:10:44 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:51.685 04:10:44 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:51.685 04:10:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:51.685 "name": "raid_bdev1", 00:19:51.685 "uuid": "92513416-972d-4a25-8b6b-f391d7ae9265", 00:19:51.685 "strip_size_kb": 0, 00:19:51.685 "state": "online", 00:19:51.685 "raid_level": "raid1", 00:19:51.685 "superblock": true, 00:19:51.685 "num_base_bdevs": 2, 00:19:51.685 "num_base_bdevs_discovered": 2, 00:19:51.685 "num_base_bdevs_operational": 2, 00:19:51.685 "process": { 00:19:51.685 "type": "rebuild", 00:19:51.685 "target": "spare", 00:19:51.685 "progress": { 00:19:51.685 "blocks": 2816, 00:19:51.685 "percent": 35 00:19:51.685 } 00:19:51.685 }, 00:19:51.685 "base_bdevs_list": [ 00:19:51.685 { 00:19:51.685 "name": "spare", 00:19:51.685 "uuid": "7f5b5493-8a67-54f9-a325-31cdc4cd0a03", 00:19:51.685 "is_configured": true, 00:19:51.685 "data_offset": 256, 00:19:51.685 "data_size": 7936 00:19:51.685 }, 00:19:51.685 { 00:19:51.685 "name": "BaseBdev2", 00:19:51.685 "uuid": "cc92c573-46b7-522d-93ce-16162d6ae1c5", 00:19:51.685 
"is_configured": true, 00:19:51.685 "data_offset": 256, 00:19:51.685 "data_size": 7936 00:19:51.685 } 00:19:51.685 ] 00:19:51.685 }' 00:19:51.685 04:10:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:51.685 04:10:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:51.685 04:10:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:51.685 04:10:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:19:51.685 04:10:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@711 -- # sleep 1 00:19:52.620 04:10:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:19:52.620 04:10:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:52.620 04:10:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:52.620 04:10:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:52.620 04:10:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:19:52.620 04:10:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:52.620 04:10:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:52.620 04:10:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:52.620 04:10:45 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:52.620 04:10:45 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:52.620 04:10:45 
bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:52.620 04:10:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:52.620 "name": "raid_bdev1", 00:19:52.620 "uuid": "92513416-972d-4a25-8b6b-f391d7ae9265", 00:19:52.620 "strip_size_kb": 0, 00:19:52.620 "state": "online", 00:19:52.620 "raid_level": "raid1", 00:19:52.620 "superblock": true, 00:19:52.620 "num_base_bdevs": 2, 00:19:52.620 "num_base_bdevs_discovered": 2, 00:19:52.620 "num_base_bdevs_operational": 2, 00:19:52.620 "process": { 00:19:52.620 "type": "rebuild", 00:19:52.620 "target": "spare", 00:19:52.620 "progress": { 00:19:52.620 "blocks": 5632, 00:19:52.620 "percent": 70 00:19:52.620 } 00:19:52.620 }, 00:19:52.620 "base_bdevs_list": [ 00:19:52.620 { 00:19:52.620 "name": "spare", 00:19:52.620 "uuid": "7f5b5493-8a67-54f9-a325-31cdc4cd0a03", 00:19:52.620 "is_configured": true, 00:19:52.620 "data_offset": 256, 00:19:52.620 "data_size": 7936 00:19:52.620 }, 00:19:52.621 { 00:19:52.621 "name": "BaseBdev2", 00:19:52.621 "uuid": "cc92c573-46b7-522d-93ce-16162d6ae1c5", 00:19:52.621 "is_configured": true, 00:19:52.621 "data_offset": 256, 00:19:52.621 "data_size": 7936 00:19:52.621 } 00:19:52.621 ] 00:19:52.621 }' 00:19:52.621 04:10:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:52.879 04:10:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:52.879 04:10:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:52.879 04:10:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:19:52.879 04:10:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@711 -- # sleep 1 00:19:53.447 [2024-12-06 04:10:46.725124] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process 
completed on raid_bdev1 00:19:53.447 [2024-12-06 04:10:46.725281] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:19:53.447 [2024-12-06 04:10:46.725392] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:53.707 04:10:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:19:53.707 04:10:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:53.707 04:10:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:53.707 04:10:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:53.707 04:10:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:19:53.707 04:10:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:53.707 04:10:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:53.707 04:10:47 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:53.707 04:10:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:53.707 04:10:47 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:53.967 04:10:47 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:53.967 04:10:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:53.967 "name": "raid_bdev1", 00:19:53.967 "uuid": "92513416-972d-4a25-8b6b-f391d7ae9265", 00:19:53.967 "strip_size_kb": 0, 00:19:53.967 "state": "online", 00:19:53.967 "raid_level": "raid1", 00:19:53.967 "superblock": true, 00:19:53.967 
"num_base_bdevs": 2, 00:19:53.967 "num_base_bdevs_discovered": 2, 00:19:53.967 "num_base_bdevs_operational": 2, 00:19:53.968 "base_bdevs_list": [ 00:19:53.968 { 00:19:53.968 "name": "spare", 00:19:53.968 "uuid": "7f5b5493-8a67-54f9-a325-31cdc4cd0a03", 00:19:53.968 "is_configured": true, 00:19:53.968 "data_offset": 256, 00:19:53.968 "data_size": 7936 00:19:53.968 }, 00:19:53.968 { 00:19:53.968 "name": "BaseBdev2", 00:19:53.968 "uuid": "cc92c573-46b7-522d-93ce-16162d6ae1c5", 00:19:53.968 "is_configured": true, 00:19:53.968 "data_offset": 256, 00:19:53.968 "data_size": 7936 00:19:53.968 } 00:19:53.968 ] 00:19:53.968 }' 00:19:53.968 04:10:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:53.968 04:10:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:19:53.968 04:10:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:53.968 04:10:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:19:53.968 04:10:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@709 -- # break 00:19:53.968 04:10:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:19:53.968 04:10:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:53.968 04:10:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:19:53.968 04:10:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none 00:19:53.968 04:10:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:53.968 04:10:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:53.968 04:10:47 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:53.968 04:10:47 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:53.968 04:10:47 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:53.968 04:10:47 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:53.968 04:10:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:53.968 "name": "raid_bdev1", 00:19:53.968 "uuid": "92513416-972d-4a25-8b6b-f391d7ae9265", 00:19:53.968 "strip_size_kb": 0, 00:19:53.968 "state": "online", 00:19:53.968 "raid_level": "raid1", 00:19:53.968 "superblock": true, 00:19:53.968 "num_base_bdevs": 2, 00:19:53.968 "num_base_bdevs_discovered": 2, 00:19:53.968 "num_base_bdevs_operational": 2, 00:19:53.968 "base_bdevs_list": [ 00:19:53.968 { 00:19:53.968 "name": "spare", 00:19:53.968 "uuid": "7f5b5493-8a67-54f9-a325-31cdc4cd0a03", 00:19:53.968 "is_configured": true, 00:19:53.968 "data_offset": 256, 00:19:53.968 "data_size": 7936 00:19:53.968 }, 00:19:53.968 { 00:19:53.968 "name": "BaseBdev2", 00:19:53.968 "uuid": "cc92c573-46b7-522d-93ce-16162d6ae1c5", 00:19:53.968 "is_configured": true, 00:19:53.968 "data_offset": 256, 00:19:53.968 "data_size": 7936 00:19:53.968 } 00:19:53.968 ] 00:19:53.968 }' 00:19:53.968 04:10:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:53.968 04:10:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:19:53.968 04:10:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:54.226 04:10:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:19:54.226 04:10:47 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:19:54.226 04:10:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:54.226 04:10:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:54.226 04:10:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:54.226 04:10:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:54.226 04:10:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:19:54.226 04:10:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:54.226 04:10:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:54.226 04:10:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:54.226 04:10:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:54.226 04:10:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:54.226 04:10:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:54.226 04:10:47 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:54.226 04:10:47 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:54.226 04:10:47 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:54.226 04:10:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:54.226 "name": "raid_bdev1", 00:19:54.226 "uuid": "92513416-972d-4a25-8b6b-f391d7ae9265", 00:19:54.226 
"strip_size_kb": 0, 00:19:54.226 "state": "online", 00:19:54.226 "raid_level": "raid1", 00:19:54.226 "superblock": true, 00:19:54.226 "num_base_bdevs": 2, 00:19:54.226 "num_base_bdevs_discovered": 2, 00:19:54.226 "num_base_bdevs_operational": 2, 00:19:54.226 "base_bdevs_list": [ 00:19:54.226 { 00:19:54.226 "name": "spare", 00:19:54.226 "uuid": "7f5b5493-8a67-54f9-a325-31cdc4cd0a03", 00:19:54.226 "is_configured": true, 00:19:54.226 "data_offset": 256, 00:19:54.226 "data_size": 7936 00:19:54.226 }, 00:19:54.226 { 00:19:54.226 "name": "BaseBdev2", 00:19:54.226 "uuid": "cc92c573-46b7-522d-93ce-16162d6ae1c5", 00:19:54.226 "is_configured": true, 00:19:54.226 "data_offset": 256, 00:19:54.226 "data_size": 7936 00:19:54.226 } 00:19:54.226 ] 00:19:54.226 }' 00:19:54.226 04:10:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:54.226 04:10:47 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:54.484 04:10:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:19:54.484 04:10:47 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:54.484 04:10:47 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:54.484 [2024-12-06 04:10:47.803592] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:19:54.484 [2024-12-06 04:10:47.803684] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:19:54.484 [2024-12-06 04:10:47.803792] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:54.484 [2024-12-06 04:10:47.803879] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:19:54.484 [2024-12-06 04:10:47.803928] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, 
state offline 00:19:54.484 04:10:47 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:54.484 04:10:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:54.484 04:10:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@720 -- # jq length 00:19:54.484 04:10:47 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:54.484 04:10:47 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:54.484 04:10:47 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:54.744 04:10:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:19:54.744 04:10:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:19:54.744 04:10:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:19:54.744 04:10:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:19:54.744 04:10:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:19:54.744 04:10:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:19:54.744 04:10:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@10 -- # local bdev_list 00:19:54.744 04:10:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:19:54.744 04:10:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@11 -- # local nbd_list 00:19:54.744 04:10:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@12 -- # local i 00:19:54.744 04:10:47 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:19:54.744 04:10:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:19:54.744 04:10:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:19:54.744 /dev/nbd0 00:19:54.744 04:10:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:19:54.744 04:10:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:19:54.744 04:10:48 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:19:54.744 04:10:48 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@873 -- # local i 00:19:54.744 04:10:48 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:19:54.744 04:10:48 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:19:54.744 04:10:48 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:19:54.744 04:10:48 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@877 -- # break 00:19:54.744 04:10:48 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:19:54.744 04:10:48 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:19:54.744 04:10:48 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:19:54.744 1+0 records in 00:19:54.744 1+0 records out 00:19:54.744 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00034332 s, 11.9 MB/s 00:19:54.744 04:10:48 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@890 -- # stat -c %s 
/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:54.744 04:10:48 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@890 -- # size=4096 00:19:54.744 04:10:48 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:55.004 04:10:48 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:19:55.004 04:10:48 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@893 -- # return 0 00:19:55.004 04:10:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:19:55.004 04:10:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:19:55.004 04:10:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:19:55.004 /dev/nbd1 00:19:55.004 04:10:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:19:55.004 04:10:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:19:55.004 04:10:48 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:19:55.004 04:10:48 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@873 -- # local i 00:19:55.004 04:10:48 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:19:55.004 04:10:48 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:19:55.004 04:10:48 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:19:55.004 04:10:48 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@877 -- # break 00:19:55.004 04:10:48 bdev_raid.raid_rebuild_test_sb_md_separate -- 
common/autotest_common.sh@888 -- # (( i = 1 )) 00:19:55.004 04:10:48 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:19:55.004 04:10:48 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:19:55.004 1+0 records in 00:19:55.004 1+0 records out 00:19:55.004 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000215918 s, 19.0 MB/s 00:19:55.004 04:10:48 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:55.004 04:10:48 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@890 -- # size=4096 00:19:55.004 04:10:48 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:55.004 04:10:48 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:19:55.004 04:10:48 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@893 -- # return 0 00:19:55.004 04:10:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:19:55.004 04:10:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:19:55.004 04:10:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:19:55.310 04:10:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:19:55.311 04:10:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:19:55.311 04:10:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:19:55.311 04:10:48 bdev_raid.raid_rebuild_test_sb_md_separate -- 
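The `cmp -i 1048576 /dev/nbd0 /dev/nbd1` above skips the first 1 MiB of each device before comparing, which lines up with the geometry reported elsewhere in this log: a `data_offset` of 256 blocks at a 4096-byte `blocklen`, i.e. the superblock/metadata region that is expected to differ between the base bdev and the spare. A sketch of the same check against throwaway files rather than real nbd devices:

```shell
#!/usr/bin/env bash
set -e
# Geometry taken from the RPC output in this log
data_offset_blocks=256
blocklen=4096
skip=$((data_offset_blocks * blocklen))   # 1048576 bytes, the -i value

a=$(mktemp); b=$(mktemp)
head -c $((skip + 8192)) /dev/urandom > "$a"
cp "$a" "$b"
# Corrupt one byte inside the skipped superblock region only
printf '\xff' | dd of="$b" bs=1 seek=100 conv=notrunc status=none
# Despite the differing headers, the data regions compare equal
cmp -i "$skip" "$a" "$b" && echo "data regions identical"
rm -f "$a" "$b"
```

`cmp -i N` ignores the first N bytes of both operands, so only the payload written through the raid bdev has to match for the test to pass.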
bdev/nbd_common.sh@50 -- # local nbd_list 00:19:55.311 04:10:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@51 -- # local i 00:19:55.311 04:10:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:19:55.311 04:10:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:19:55.571 04:10:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:19:55.571 04:10:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:19:55.571 04:10:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:19:55.571 04:10:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:19:55.571 04:10:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:19:55.571 04:10:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:19:55.571 04:10:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@41 -- # break 00:19:55.571 04:10:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@45 -- # return 0 00:19:55.571 04:10:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:19:55.571 04:10:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:19:55.830 04:10:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:19:55.830 04:10:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:19:55.830 04:10:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 
00:19:55.830 04:10:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:19:55.830 04:10:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:19:55.830 04:10:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:19:55.830 04:10:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@41 -- # break 00:19:55.830 04:10:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@45 -- # return 0 00:19:55.830 04:10:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:19:55.830 04:10:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:19:55.830 04:10:48 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:55.830 04:10:48 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:55.830 04:10:48 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:55.830 04:10:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:19:55.831 04:10:48 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:55.831 04:10:48 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:55.831 [2024-12-06 04:10:48.962279] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:19:55.831 [2024-12-06 04:10:48.962340] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:55.831 [2024-12-06 04:10:48.962362] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:19:55.831 [2024-12-06 04:10:48.962372] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 
00:19:55.831 [2024-12-06 04:10:48.964367] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:55.831 [2024-12-06 04:10:48.964406] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:19:55.831 [2024-12-06 04:10:48.964474] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:19:55.831 [2024-12-06 04:10:48.964540] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:19:55.831 [2024-12-06 04:10:48.964692] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:19:55.831 spare 00:19:55.831 04:10:48 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:55.831 04:10:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:19:55.831 04:10:48 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:55.831 04:10:48 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:55.831 [2024-12-06 04:10:49.064579] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:19:55.831 [2024-12-06 04:10:49.064660] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:19:55.831 [2024-12-06 04:10:49.064776] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c1b50 00:19:55.831 [2024-12-06 04:10:49.064926] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:19:55.831 [2024-12-06 04:10:49.064934] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:19:55.831 [2024-12-06 04:10:49.065077] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:55.831 04:10:49 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:19:55.831 04:10:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:19:55.831 04:10:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:55.831 04:10:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:55.831 04:10:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:55.831 04:10:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:55.831 04:10:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:19:55.831 04:10:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:55.831 04:10:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:55.831 04:10:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:55.831 04:10:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:55.831 04:10:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:55.831 04:10:49 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:55.831 04:10:49 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:55.831 04:10:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:55.831 04:10:49 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:55.831 04:10:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:55.831 "name": "raid_bdev1", 00:19:55.831 "uuid": 
"92513416-972d-4a25-8b6b-f391d7ae9265", 00:19:55.831 "strip_size_kb": 0, 00:19:55.831 "state": "online", 00:19:55.831 "raid_level": "raid1", 00:19:55.831 "superblock": true, 00:19:55.831 "num_base_bdevs": 2, 00:19:55.831 "num_base_bdevs_discovered": 2, 00:19:55.831 "num_base_bdevs_operational": 2, 00:19:55.831 "base_bdevs_list": [ 00:19:55.831 { 00:19:55.831 "name": "spare", 00:19:55.831 "uuid": "7f5b5493-8a67-54f9-a325-31cdc4cd0a03", 00:19:55.831 "is_configured": true, 00:19:55.831 "data_offset": 256, 00:19:55.831 "data_size": 7936 00:19:55.831 }, 00:19:55.831 { 00:19:55.831 "name": "BaseBdev2", 00:19:55.831 "uuid": "cc92c573-46b7-522d-93ce-16162d6ae1c5", 00:19:55.831 "is_configured": true, 00:19:55.831 "data_offset": 256, 00:19:55.831 "data_size": 7936 00:19:55.831 } 00:19:55.831 ] 00:19:55.831 }' 00:19:55.831 04:10:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:55.831 04:10:49 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:56.400 04:10:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:19:56.400 04:10:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:56.400 04:10:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:19:56.400 04:10:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none 00:19:56.400 04:10:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:56.400 04:10:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:56.400 04:10:49 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:56.400 04:10:49 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 
00:19:56.400 04:10:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:56.400 04:10:49 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:56.400 04:10:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:56.400 "name": "raid_bdev1", 00:19:56.400 "uuid": "92513416-972d-4a25-8b6b-f391d7ae9265", 00:19:56.400 "strip_size_kb": 0, 00:19:56.400 "state": "online", 00:19:56.400 "raid_level": "raid1", 00:19:56.400 "superblock": true, 00:19:56.400 "num_base_bdevs": 2, 00:19:56.400 "num_base_bdevs_discovered": 2, 00:19:56.400 "num_base_bdevs_operational": 2, 00:19:56.400 "base_bdevs_list": [ 00:19:56.400 { 00:19:56.400 "name": "spare", 00:19:56.400 "uuid": "7f5b5493-8a67-54f9-a325-31cdc4cd0a03", 00:19:56.400 "is_configured": true, 00:19:56.400 "data_offset": 256, 00:19:56.400 "data_size": 7936 00:19:56.400 }, 00:19:56.400 { 00:19:56.400 "name": "BaseBdev2", 00:19:56.400 "uuid": "cc92c573-46b7-522d-93ce-16162d6ae1c5", 00:19:56.400 "is_configured": true, 00:19:56.400 "data_offset": 256, 00:19:56.400 "data_size": 7936 00:19:56.400 } 00:19:56.400 ] 00:19:56.400 }' 00:19:56.400 04:10:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:56.400 04:10:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:19:56.400 04:10:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:56.400 04:10:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:19:56.400 04:10:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:56.400 04:10:49 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:56.400 04:10:49 
bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:56.400 04:10:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:19:56.400 04:10:49 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:56.400 04:10:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:19:56.400 04:10:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:19:56.400 04:10:49 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:56.400 04:10:49 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:56.400 [2024-12-06 04:10:49.721060] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:19:56.400 04:10:49 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:56.400 04:10:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:19:56.400 04:10:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:56.400 04:10:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:56.400 04:10:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:56.400 04:10:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:56.400 04:10:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:19:56.400 04:10:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:56.400 04:10:49 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:56.400 04:10:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:56.400 04:10:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:56.400 04:10:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:56.400 04:10:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:56.400 04:10:49 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:56.400 04:10:49 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:56.400 04:10:49 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:56.658 04:10:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:56.659 "name": "raid_bdev1", 00:19:56.659 "uuid": "92513416-972d-4a25-8b6b-f391d7ae9265", 00:19:56.659 "strip_size_kb": 0, 00:19:56.659 "state": "online", 00:19:56.659 "raid_level": "raid1", 00:19:56.659 "superblock": true, 00:19:56.659 "num_base_bdevs": 2, 00:19:56.659 "num_base_bdevs_discovered": 1, 00:19:56.659 "num_base_bdevs_operational": 1, 00:19:56.659 "base_bdevs_list": [ 00:19:56.659 { 00:19:56.659 "name": null, 00:19:56.659 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:56.659 "is_configured": false, 00:19:56.659 "data_offset": 0, 00:19:56.659 "data_size": 7936 00:19:56.659 }, 00:19:56.659 { 00:19:56.659 "name": "BaseBdev2", 00:19:56.659 "uuid": "cc92c573-46b7-522d-93ce-16162d6ae1c5", 00:19:56.659 "is_configured": true, 00:19:56.659 "data_offset": 256, 00:19:56.659 "data_size": 7936 00:19:56.659 } 00:19:56.659 ] 00:19:56.659 }' 00:19:56.659 04:10:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:56.659 04:10:49 
bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:56.916 04:10:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:19:56.916 04:10:50 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:56.916 04:10:50 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:56.916 [2024-12-06 04:10:50.100429] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:19:56.916 [2024-12-06 04:10:50.100702] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:19:56.916 [2024-12-06 04:10:50.100773] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:19:56.916 [2024-12-06 04:10:50.100837] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:19:56.916 [2024-12-06 04:10:50.114033] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c1c20 00:19:56.916 04:10:50 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:56.917 04:10:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@757 -- # sleep 1 00:19:56.917 [2024-12-06 04:10:50.115854] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:19:57.856 04:10:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:57.856 04:10:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:57.856 04:10:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:57.856 04:10:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local 
target=spare 00:19:57.856 04:10:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:57.856 04:10:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:57.856 04:10:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:57.856 04:10:51 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:57.856 04:10:51 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:57.856 04:10:51 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:57.856 04:10:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:57.856 "name": "raid_bdev1", 00:19:57.856 "uuid": "92513416-972d-4a25-8b6b-f391d7ae9265", 00:19:57.856 "strip_size_kb": 0, 00:19:57.856 "state": "online", 00:19:57.856 "raid_level": "raid1", 00:19:57.856 "superblock": true, 00:19:57.856 "num_base_bdevs": 2, 00:19:57.856 "num_base_bdevs_discovered": 2, 00:19:57.856 "num_base_bdevs_operational": 2, 00:19:57.856 "process": { 00:19:57.856 "type": "rebuild", 00:19:57.856 "target": "spare", 00:19:57.856 "progress": { 00:19:57.856 "blocks": 2560, 00:19:57.856 "percent": 32 00:19:57.856 } 00:19:57.856 }, 00:19:57.856 "base_bdevs_list": [ 00:19:57.856 { 00:19:57.856 "name": "spare", 00:19:57.856 "uuid": "7f5b5493-8a67-54f9-a325-31cdc4cd0a03", 00:19:57.856 "is_configured": true, 00:19:57.856 "data_offset": 256, 00:19:57.856 "data_size": 7936 00:19:57.856 }, 00:19:57.856 { 00:19:57.856 "name": "BaseBdev2", 00:19:57.856 "uuid": "cc92c573-46b7-522d-93ce-16162d6ae1c5", 00:19:57.856 "is_configured": true, 00:19:57.856 "data_offset": 256, 00:19:57.856 "data_size": 7936 00:19:57.856 } 00:19:57.856 ] 00:19:57.856 }' 00:19:57.856 04:10:51 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:57.856 04:10:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:57.856 04:10:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:58.116 04:10:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:19:58.116 04:10:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:19:58.116 04:10:51 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:58.116 04:10:51 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:58.116 [2024-12-06 04:10:51.251819] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:19:58.116 [2024-12-06 04:10:51.321313] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:19:58.116 [2024-12-06 04:10:51.321378] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:58.116 [2024-12-06 04:10:51.321393] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:19:58.116 [2024-12-06 04:10:51.321413] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:19:58.116 04:10:51 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:58.116 04:10:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:19:58.116 04:10:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:58.116 04:10:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:58.116 04:10:51 bdev_raid.raid_rebuild_test_sb_md_separate 
-- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:58.116 04:10:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:58.116 04:10:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:19:58.116 04:10:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:58.116 04:10:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:58.116 04:10:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:58.116 04:10:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:58.116 04:10:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:58.116 04:10:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:58.116 04:10:51 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:58.116 04:10:51 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:58.116 04:10:51 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:58.116 04:10:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:58.116 "name": "raid_bdev1", 00:19:58.116 "uuid": "92513416-972d-4a25-8b6b-f391d7ae9265", 00:19:58.116 "strip_size_kb": 0, 00:19:58.116 "state": "online", 00:19:58.116 "raid_level": "raid1", 00:19:58.116 "superblock": true, 00:19:58.116 "num_base_bdevs": 2, 00:19:58.116 "num_base_bdevs_discovered": 1, 00:19:58.116 "num_base_bdevs_operational": 1, 00:19:58.116 "base_bdevs_list": [ 00:19:58.116 { 00:19:58.116 "name": null, 00:19:58.116 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:58.116 
"is_configured": false, 00:19:58.116 "data_offset": 0, 00:19:58.116 "data_size": 7936 00:19:58.116 }, 00:19:58.116 { 00:19:58.116 "name": "BaseBdev2", 00:19:58.116 "uuid": "cc92c573-46b7-522d-93ce-16162d6ae1c5", 00:19:58.116 "is_configured": true, 00:19:58.116 "data_offset": 256, 00:19:58.116 "data_size": 7936 00:19:58.116 } 00:19:58.116 ] 00:19:58.116 }' 00:19:58.116 04:10:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:58.116 04:10:51 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:58.684 04:10:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:19:58.684 04:10:51 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:58.684 04:10:51 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:58.684 [2024-12-06 04:10:51.749980] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:19:58.684 [2024-12-06 04:10:51.750168] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:58.684 [2024-12-06 04:10:51.750215] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:19:58.684 [2024-12-06 04:10:51.750246] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:58.684 [2024-12-06 04:10:51.750537] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:58.684 [2024-12-06 04:10:51.750598] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:19:58.684 [2024-12-06 04:10:51.750698] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:19:58.684 [2024-12-06 04:10:51.750742] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 
00:19:58.684 [2024-12-06 04:10:51.750786] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:19:58.684 [2024-12-06 04:10:51.750839] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:19:58.684 [2024-12-06 04:10:51.764985] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c1cf0 00:19:58.684 spare 00:19:58.684 04:10:51 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:58.684 04:10:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@764 -- # sleep 1 00:19:58.684 [2024-12-06 04:10:51.766876] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:19:59.623 04:10:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:59.623 04:10:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:59.623 04:10:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:59.623 04:10:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:19:59.623 04:10:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:59.623 04:10:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:59.623 04:10:52 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:59.623 04:10:52 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:59.623 04:10:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:59.623 04:10:52 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:19:59.623 04:10:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:59.623 "name": "raid_bdev1", 00:19:59.623 "uuid": "92513416-972d-4a25-8b6b-f391d7ae9265", 00:19:59.623 "strip_size_kb": 0, 00:19:59.623 "state": "online", 00:19:59.623 "raid_level": "raid1", 00:19:59.623 "superblock": true, 00:19:59.623 "num_base_bdevs": 2, 00:19:59.623 "num_base_bdevs_discovered": 2, 00:19:59.623 "num_base_bdevs_operational": 2, 00:19:59.623 "process": { 00:19:59.623 "type": "rebuild", 00:19:59.623 "target": "spare", 00:19:59.623 "progress": { 00:19:59.623 "blocks": 2560, 00:19:59.623 "percent": 32 00:19:59.623 } 00:19:59.623 }, 00:19:59.623 "base_bdevs_list": [ 00:19:59.623 { 00:19:59.623 "name": "spare", 00:19:59.623 "uuid": "7f5b5493-8a67-54f9-a325-31cdc4cd0a03", 00:19:59.623 "is_configured": true, 00:19:59.623 "data_offset": 256, 00:19:59.623 "data_size": 7936 00:19:59.623 }, 00:19:59.623 { 00:19:59.623 "name": "BaseBdev2", 00:19:59.623 "uuid": "cc92c573-46b7-522d-93ce-16162d6ae1c5", 00:19:59.623 "is_configured": true, 00:19:59.623 "data_offset": 256, 00:19:59.623 "data_size": 7936 00:19:59.623 } 00:19:59.623 ] 00:19:59.623 }' 00:19:59.623 04:10:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:59.623 04:10:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:59.623 04:10:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:59.623 04:10:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:19:59.623 04:10:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:19:59.623 04:10:52 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:59.623 04:10:52 
bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:59.623 [2024-12-06 04:10:52.906804] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:19:59.623 [2024-12-06 04:10:52.972234] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:19:59.623 [2024-12-06 04:10:52.972345] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:59.623 [2024-12-06 04:10:52.972399] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:19:59.623 [2024-12-06 04:10:52.972420] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:19:59.883 04:10:52 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:59.883 04:10:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:19:59.883 04:10:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:59.883 04:10:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:59.883 04:10:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:59.883 04:10:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:59.884 04:10:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:19:59.884 04:10:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:59.884 04:10:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:59.884 04:10:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:59.884 04:10:52 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:59.884 04:10:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:59.884 04:10:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:59.884 04:10:53 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:59.884 04:10:53 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:59.884 04:10:53 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:59.884 04:10:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:59.884 "name": "raid_bdev1", 00:19:59.884 "uuid": "92513416-972d-4a25-8b6b-f391d7ae9265", 00:19:59.884 "strip_size_kb": 0, 00:19:59.884 "state": "online", 00:19:59.884 "raid_level": "raid1", 00:19:59.884 "superblock": true, 00:19:59.884 "num_base_bdevs": 2, 00:19:59.884 "num_base_bdevs_discovered": 1, 00:19:59.884 "num_base_bdevs_operational": 1, 00:19:59.884 "base_bdevs_list": [ 00:19:59.884 { 00:19:59.884 "name": null, 00:19:59.884 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:59.884 "is_configured": false, 00:19:59.884 "data_offset": 0, 00:19:59.884 "data_size": 7936 00:19:59.884 }, 00:19:59.884 { 00:19:59.884 "name": "BaseBdev2", 00:19:59.884 "uuid": "cc92c573-46b7-522d-93ce-16162d6ae1c5", 00:19:59.884 "is_configured": true, 00:19:59.884 "data_offset": 256, 00:19:59.884 "data_size": 7936 00:19:59.884 } 00:19:59.884 ] 00:19:59.884 }' 00:19:59.884 04:10:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:59.884 04:10:53 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:00.143 04:10:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@770 -- # 
verify_raid_bdev_process raid_bdev1 none none 00:20:00.143 04:10:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:00.143 04:10:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:20:00.143 04:10:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none 00:20:00.143 04:10:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:00.143 04:10:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:00.143 04:10:53 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:00.143 04:10:53 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:00.143 04:10:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:00.143 04:10:53 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:00.143 04:10:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:00.143 "name": "raid_bdev1", 00:20:00.143 "uuid": "92513416-972d-4a25-8b6b-f391d7ae9265", 00:20:00.143 "strip_size_kb": 0, 00:20:00.143 "state": "online", 00:20:00.143 "raid_level": "raid1", 00:20:00.143 "superblock": true, 00:20:00.143 "num_base_bdevs": 2, 00:20:00.143 "num_base_bdevs_discovered": 1, 00:20:00.143 "num_base_bdevs_operational": 1, 00:20:00.143 "base_bdevs_list": [ 00:20:00.143 { 00:20:00.143 "name": null, 00:20:00.143 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:00.143 "is_configured": false, 00:20:00.143 "data_offset": 0, 00:20:00.143 "data_size": 7936 00:20:00.143 }, 00:20:00.143 { 00:20:00.143 "name": "BaseBdev2", 00:20:00.143 "uuid": "cc92c573-46b7-522d-93ce-16162d6ae1c5", 00:20:00.143 "is_configured": true, 
00:20:00.143 "data_offset": 256, 00:20:00.143 "data_size": 7936 00:20:00.143 } 00:20:00.143 ] 00:20:00.143 }' 00:20:00.143 04:10:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:00.403 04:10:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:20:00.403 04:10:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:00.403 04:10:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:20:00.403 04:10:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:20:00.403 04:10:53 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:00.403 04:10:53 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:00.403 04:10:53 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:00.403 04:10:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:20:00.403 04:10:53 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:00.403 04:10:53 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:00.403 [2024-12-06 04:10:53.571030] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:20:00.403 [2024-12-06 04:10:53.571094] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:00.403 [2024-12-06 04:10:53.571117] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:20:00.403 [2024-12-06 04:10:53.571125] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:00.403 [2024-12-06 04:10:53.571350] 
vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:00.403 [2024-12-06 04:10:53.571361] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:20:00.403 [2024-12-06 04:10:53.571409] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:20:00.403 [2024-12-06 04:10:53.571422] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:20:00.403 [2024-12-06 04:10:53.571431] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:20:00.403 [2024-12-06 04:10:53.571442] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:20:00.403 BaseBdev1 00:20:00.403 04:10:53 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:00.403 04:10:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@775 -- # sleep 1 00:20:01.342 04:10:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:20:01.342 04:10:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:01.342 04:10:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:01.342 04:10:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:20:01.342 04:10:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:20:01.342 04:10:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:20:01.342 04:10:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:01.342 04:10:54 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:01.342 04:10:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:01.342 04:10:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:01.342 04:10:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:01.342 04:10:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:01.342 04:10:54 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:01.342 04:10:54 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:01.342 04:10:54 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:01.342 04:10:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:01.342 "name": "raid_bdev1", 00:20:01.342 "uuid": "92513416-972d-4a25-8b6b-f391d7ae9265", 00:20:01.342 "strip_size_kb": 0, 00:20:01.342 "state": "online", 00:20:01.342 "raid_level": "raid1", 00:20:01.342 "superblock": true, 00:20:01.342 "num_base_bdevs": 2, 00:20:01.342 "num_base_bdevs_discovered": 1, 00:20:01.342 "num_base_bdevs_operational": 1, 00:20:01.342 "base_bdevs_list": [ 00:20:01.342 { 00:20:01.342 "name": null, 00:20:01.342 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:01.342 "is_configured": false, 00:20:01.342 "data_offset": 0, 00:20:01.343 "data_size": 7936 00:20:01.343 }, 00:20:01.343 { 00:20:01.343 "name": "BaseBdev2", 00:20:01.343 "uuid": "cc92c573-46b7-522d-93ce-16162d6ae1c5", 00:20:01.343 "is_configured": true, 00:20:01.343 "data_offset": 256, 00:20:01.343 "data_size": 7936 00:20:01.343 } 00:20:01.343 ] 00:20:01.343 }' 00:20:01.343 04:10:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:01.343 04:10:54 
bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:01.979 04:10:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:20:01.979 04:10:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:01.979 04:10:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:20:01.979 04:10:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none 00:20:01.979 04:10:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:01.979 04:10:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:01.979 04:10:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:01.979 04:10:55 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:01.979 04:10:55 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:01.979 04:10:55 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:01.979 04:10:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:01.979 "name": "raid_bdev1", 00:20:01.979 "uuid": "92513416-972d-4a25-8b6b-f391d7ae9265", 00:20:01.979 "strip_size_kb": 0, 00:20:01.979 "state": "online", 00:20:01.979 "raid_level": "raid1", 00:20:01.979 "superblock": true, 00:20:01.979 "num_base_bdevs": 2, 00:20:01.979 "num_base_bdevs_discovered": 1, 00:20:01.979 "num_base_bdevs_operational": 1, 00:20:01.979 "base_bdevs_list": [ 00:20:01.979 { 00:20:01.979 "name": null, 00:20:01.979 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:01.979 "is_configured": false, 00:20:01.979 "data_offset": 0, 00:20:01.979 
"data_size": 7936 00:20:01.979 }, 00:20:01.979 { 00:20:01.979 "name": "BaseBdev2", 00:20:01.979 "uuid": "cc92c573-46b7-522d-93ce-16162d6ae1c5", 00:20:01.979 "is_configured": true, 00:20:01.979 "data_offset": 256, 00:20:01.979 "data_size": 7936 00:20:01.979 } 00:20:01.979 ] 00:20:01.979 }' 00:20:01.979 04:10:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:01.979 04:10:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:20:01.979 04:10:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:01.979 04:10:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:20:01.979 04:10:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:20:01.979 04:10:55 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@652 -- # local es=0 00:20:01.979 04:10:55 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:20:01.979 04:10:55 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:20:01.979 04:10:55 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:01.979 04:10:55 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:20:01.979 04:10:55 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:01.979 04:10:55 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:20:01.979 04:10:55 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 
00:20:01.979 04:10:55 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:01.979 [2024-12-06 04:10:55.208566] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:20:01.979 [2024-12-06 04:10:55.208804] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:20:01.979 [2024-12-06 04:10:55.208823] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:20:01.979 request: 00:20:01.979 { 00:20:01.979 "base_bdev": "BaseBdev1", 00:20:01.979 "raid_bdev": "raid_bdev1", 00:20:01.979 "method": "bdev_raid_add_base_bdev", 00:20:01.979 "req_id": 1 00:20:01.979 } 00:20:01.979 Got JSON-RPC error response 00:20:01.979 response: 00:20:01.979 { 00:20:01.979 "code": -22, 00:20:01.979 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:20:01.979 } 00:20:01.979 04:10:55 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:20:01.979 04:10:55 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@655 -- # es=1 00:20:01.979 04:10:55 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:20:01.979 04:10:55 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:20:01.979 04:10:55 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:20:01.979 04:10:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@779 -- # sleep 1 00:20:02.918 04:10:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:20:02.918 04:10:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:02.918 04:10:56 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:02.918 04:10:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:20:02.918 04:10:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:20:02.918 04:10:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:20:02.918 04:10:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:02.918 04:10:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:02.918 04:10:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:02.918 04:10:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:02.918 04:10:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:02.918 04:10:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:02.918 04:10:56 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:02.918 04:10:56 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:02.918 04:10:56 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:03.177 04:10:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:03.178 "name": "raid_bdev1", 00:20:03.178 "uuid": "92513416-972d-4a25-8b6b-f391d7ae9265", 00:20:03.178 "strip_size_kb": 0, 00:20:03.178 "state": "online", 00:20:03.178 "raid_level": "raid1", 00:20:03.178 "superblock": true, 00:20:03.178 "num_base_bdevs": 2, 00:20:03.178 "num_base_bdevs_discovered": 1, 00:20:03.178 "num_base_bdevs_operational": 1, 00:20:03.178 "base_bdevs_list": [ 
00:20:03.178 { 00:20:03.178 "name": null, 00:20:03.178 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:03.178 "is_configured": false, 00:20:03.178 "data_offset": 0, 00:20:03.178 "data_size": 7936 00:20:03.178 }, 00:20:03.178 { 00:20:03.178 "name": "BaseBdev2", 00:20:03.178 "uuid": "cc92c573-46b7-522d-93ce-16162d6ae1c5", 00:20:03.178 "is_configured": true, 00:20:03.178 "data_offset": 256, 00:20:03.178 "data_size": 7936 00:20:03.178 } 00:20:03.178 ] 00:20:03.178 }' 00:20:03.178 04:10:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:03.178 04:10:56 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:03.437 04:10:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:20:03.437 04:10:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:03.437 04:10:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:20:03.437 04:10:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none 00:20:03.437 04:10:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:03.437 04:10:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:03.437 04:10:56 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:03.437 04:10:56 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:03.437 04:10:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:03.437 04:10:56 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:03.437 04:10:56 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:03.437 "name": "raid_bdev1", 00:20:03.437 "uuid": "92513416-972d-4a25-8b6b-f391d7ae9265", 00:20:03.437 "strip_size_kb": 0, 00:20:03.437 "state": "online", 00:20:03.437 "raid_level": "raid1", 00:20:03.437 "superblock": true, 00:20:03.437 "num_base_bdevs": 2, 00:20:03.437 "num_base_bdevs_discovered": 1, 00:20:03.437 "num_base_bdevs_operational": 1, 00:20:03.437 "base_bdevs_list": [ 00:20:03.437 { 00:20:03.437 "name": null, 00:20:03.437 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:03.437 "is_configured": false, 00:20:03.437 "data_offset": 0, 00:20:03.437 "data_size": 7936 00:20:03.437 }, 00:20:03.437 { 00:20:03.437 "name": "BaseBdev2", 00:20:03.437 "uuid": "cc92c573-46b7-522d-93ce-16162d6ae1c5", 00:20:03.437 "is_configured": true, 00:20:03.437 "data_offset": 256, 00:20:03.437 "data_size": 7936 00:20:03.437 } 00:20:03.437 ] 00:20:03.437 }' 00:20:03.437 04:10:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:03.437 04:10:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:20:03.437 04:10:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:03.698 04:10:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:20:03.698 04:10:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@784 -- # killprocess 88041 00:20:03.698 04:10:56 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@954 -- # '[' -z 88041 ']' 00:20:03.698 04:10:56 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@958 -- # kill -0 88041 00:20:03.698 04:10:56 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@959 -- # uname 00:20:03.698 04:10:56 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:03.698 
04:10:56 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 88041 00:20:03.698 04:10:56 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:20:03.698 04:10:56 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:20:03.698 04:10:56 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@972 -- # echo 'killing process with pid 88041' 00:20:03.698 killing process with pid 88041 00:20:03.698 04:10:56 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@973 -- # kill 88041 00:20:03.698 Received shutdown signal, test time was about 60.000000 seconds 00:20:03.698 00:20:03.698 Latency(us) 00:20:03.698 [2024-12-06T04:10:57.052Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:03.698 [2024-12-06T04:10:57.052Z] =================================================================================================================== 00:20:03.698 [2024-12-06T04:10:57.052Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:20:03.698 [2024-12-06 04:10:56.853184] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:20:03.698 [2024-12-06 04:10:56.853313] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:20:03.698 04:10:56 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@978 -- # wait 88041 00:20:03.698 [2024-12-06 04:10:56.853360] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:20:03.698 [2024-12-06 04:10:56.853372] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:20:03.958 [2024-12-06 04:10:57.170171] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:20:04.898 04:10:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@786 -- # 
return 0 00:20:04.898 ************************************ 00:20:04.898 END TEST raid_rebuild_test_sb_md_separate 00:20:04.898 ************************************ 00:20:04.898 00:20:04.898 real 0m19.410s 00:20:04.898 user 0m25.206s 00:20:04.898 sys 0m2.429s 00:20:04.898 04:10:58 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:04.898 04:10:58 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:05.159 04:10:58 bdev_raid -- bdev/bdev_raid.sh@1010 -- # base_malloc_params='-m 32 -i' 00:20:05.159 04:10:58 bdev_raid -- bdev/bdev_raid.sh@1011 -- # run_test raid_state_function_test_sb_md_interleaved raid_state_function_test raid1 2 true 00:20:05.159 04:10:58 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:20:05.159 04:10:58 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:05.159 04:10:58 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:20:05.159 ************************************ 00:20:05.159 START TEST raid_state_function_test_sb_md_interleaved 00:20:05.159 ************************************ 00:20:05.159 04:10:58 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@1129 -- # raid_state_function_test raid1 2 true 00:20:05.159 04:10:58 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:20:05.159 04:10:58 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:20:05.159 04:10:58 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:20:05.159 04:10:58 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:20:05.159 04:10:58 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:20:05.159 04:10:58 bdev_raid.raid_state_function_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:20:05.159 04:10:58 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:20:05.159 04:10:58 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:20:05.159 04:10:58 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:20:05.159 04:10:58 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:20:05.159 04:10:58 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:20:05.159 04:10:58 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:20:05.159 04:10:58 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:20:05.159 04:10:58 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:20:05.159 04:10:58 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:20:05.159 04:10:58 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@211 -- # local strip_size 00:20:05.159 04:10:58 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:20:05.159 04:10:58 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:20:05.159 04:10:58 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:20:05.159 04:10:58 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:20:05.159 04:10:58 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:20:05.159 04:10:58 
bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:20:05.159 04:10:58 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@229 -- # raid_pid=88727 00:20:05.159 04:10:58 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:20:05.159 04:10:58 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 88727' 00:20:05.159 Process raid pid: 88727 00:20:05.159 04:10:58 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@231 -- # waitforlisten 88727 00:20:05.159 04:10:58 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@835 -- # '[' -z 88727 ']' 00:20:05.159 04:10:58 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:05.159 04:10:58 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:05.159 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:05.159 04:10:58 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:05.159 04:10:58 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:05.159 04:10:58 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:05.159 [2024-12-06 04:10:58.411866] Starting SPDK v25.01-pre git sha1 a4d2a837b / DPDK 24.03.0 initialization... 
00:20:05.159 [2024-12-06 04:10:58.411977] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:05.419 [2024-12-06 04:10:58.561455] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:05.419 [2024-12-06 04:10:58.672321] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:05.680 [2024-12-06 04:10:58.870825] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:20:05.680 [2024-12-06 04:10:58.870866] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:20:06.250 04:10:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:06.250 04:10:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@868 -- # return 0 00:20:06.250 04:10:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:20:06.250 04:10:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:06.250 04:10:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:06.250 [2024-12-06 04:10:59.301497] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:20:06.250 [2024-12-06 04:10:59.301554] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:20:06.250 [2024-12-06 04:10:59.301565] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:20:06.250 [2024-12-06 04:10:59.301575] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:20:06.250 04:10:59 
bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:06.250 04:10:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:20:06.250 04:10:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:20:06.250 04:10:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:20:06.250 04:10:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:20:06.250 04:10:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:20:06.250 04:10:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:20:06.250 04:10:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:06.250 04:10:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:06.250 04:10:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:06.250 04:10:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:06.250 04:10:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:06.250 04:10:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:06.250 04:10:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:06.250 04:10:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:06.250 04:10:59 
bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:06.250 04:10:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:06.250 "name": "Existed_Raid", 00:20:06.250 "uuid": "19d1bd34-4fc3-4964-957d-fa42ac320692", 00:20:06.250 "strip_size_kb": 0, 00:20:06.250 "state": "configuring", 00:20:06.250 "raid_level": "raid1", 00:20:06.251 "superblock": true, 00:20:06.251 "num_base_bdevs": 2, 00:20:06.251 "num_base_bdevs_discovered": 0, 00:20:06.251 "num_base_bdevs_operational": 2, 00:20:06.251 "base_bdevs_list": [ 00:20:06.251 { 00:20:06.251 "name": "BaseBdev1", 00:20:06.251 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:06.251 "is_configured": false, 00:20:06.251 "data_offset": 0, 00:20:06.251 "data_size": 0 00:20:06.251 }, 00:20:06.251 { 00:20:06.251 "name": "BaseBdev2", 00:20:06.251 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:06.251 "is_configured": false, 00:20:06.251 "data_offset": 0, 00:20:06.251 "data_size": 0 00:20:06.251 } 00:20:06.251 ] 00:20:06.251 }' 00:20:06.251 04:10:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:06.251 04:10:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:06.511 04:10:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:20:06.511 04:10:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:06.511 04:10:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:06.511 [2024-12-06 04:10:59.736719] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:20:06.511 [2024-12-06 04:10:59.736757] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state 
configuring 00:20:06.511 04:10:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:06.511 04:10:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:20:06.511 04:10:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:06.511 04:10:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:06.511 [2024-12-06 04:10:59.748679] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:20:06.511 [2024-12-06 04:10:59.748715] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:20:06.511 [2024-12-06 04:10:59.748724] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:20:06.511 [2024-12-06 04:10:59.748735] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:20:06.511 04:10:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:06.511 04:10:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b BaseBdev1 00:20:06.511 04:10:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:06.511 04:10:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:06.511 [2024-12-06 04:10:59.792830] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:20:06.511 BaseBdev1 00:20:06.511 04:10:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:06.511 04:10:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:20:06.511 04:10:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:20:06.511 04:10:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:20:06.511 04:10:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@905 -- # local i 00:20:06.511 04:10:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:20:06.511 04:10:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:20:06.511 04:10:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:20:06.511 04:10:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:06.512 04:10:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:06.512 04:10:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:06.512 04:10:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:20:06.512 04:10:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:06.512 04:10:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:06.512 [ 00:20:06.512 { 00:20:06.512 "name": "BaseBdev1", 00:20:06.512 "aliases": [ 00:20:06.512 "b4d120c0-e7ba-4b4c-b70c-6cc58c0a93dd" 00:20:06.512 ], 00:20:06.512 "product_name": "Malloc disk", 00:20:06.512 "block_size": 4128, 00:20:06.512 "num_blocks": 8192, 00:20:06.512 "uuid": "b4d120c0-e7ba-4b4c-b70c-6cc58c0a93dd", 00:20:06.512 "md_size": 32, 00:20:06.512 
"md_interleave": true, 00:20:06.512 "dif_type": 0, 00:20:06.512 "assigned_rate_limits": { 00:20:06.512 "rw_ios_per_sec": 0, 00:20:06.512 "rw_mbytes_per_sec": 0, 00:20:06.512 "r_mbytes_per_sec": 0, 00:20:06.512 "w_mbytes_per_sec": 0 00:20:06.512 }, 00:20:06.512 "claimed": true, 00:20:06.512 "claim_type": "exclusive_write", 00:20:06.512 "zoned": false, 00:20:06.512 "supported_io_types": { 00:20:06.512 "read": true, 00:20:06.512 "write": true, 00:20:06.512 "unmap": true, 00:20:06.512 "flush": true, 00:20:06.512 "reset": true, 00:20:06.512 "nvme_admin": false, 00:20:06.512 "nvme_io": false, 00:20:06.512 "nvme_io_md": false, 00:20:06.512 "write_zeroes": true, 00:20:06.512 "zcopy": true, 00:20:06.512 "get_zone_info": false, 00:20:06.512 "zone_management": false, 00:20:06.512 "zone_append": false, 00:20:06.512 "compare": false, 00:20:06.512 "compare_and_write": false, 00:20:06.512 "abort": true, 00:20:06.512 "seek_hole": false, 00:20:06.512 "seek_data": false, 00:20:06.512 "copy": true, 00:20:06.512 "nvme_iov_md": false 00:20:06.512 }, 00:20:06.512 "memory_domains": [ 00:20:06.512 { 00:20:06.512 "dma_device_id": "system", 00:20:06.512 "dma_device_type": 1 00:20:06.512 }, 00:20:06.512 { 00:20:06.512 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:06.512 "dma_device_type": 2 00:20:06.512 } 00:20:06.512 ], 00:20:06.512 "driver_specific": {} 00:20:06.512 } 00:20:06.512 ] 00:20:06.512 04:10:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:06.512 04:10:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@911 -- # return 0 00:20:06.512 04:10:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:20:06.512 04:10:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:20:06.512 04:10:59 
bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:20:06.512 04:10:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:20:06.512 04:10:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:20:06.512 04:10:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:20:06.512 04:10:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:06.512 04:10:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:06.512 04:10:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:06.512 04:10:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:06.512 04:10:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:06.512 04:10:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:06.512 04:10:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:06.512 04:10:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:06.512 04:10:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:06.772 04:10:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:06.772 "name": "Existed_Raid", 00:20:06.772 "uuid": "300dc002-5932-4cdc-b313-6dc4276bfef5", 00:20:06.772 "strip_size_kb": 0, 00:20:06.772 "state": "configuring", 00:20:06.772 "raid_level": "raid1", 
00:20:06.772 "superblock": true, 00:20:06.772 "num_base_bdevs": 2, 00:20:06.772 "num_base_bdevs_discovered": 1, 00:20:06.772 "num_base_bdevs_operational": 2, 00:20:06.772 "base_bdevs_list": [ 00:20:06.772 { 00:20:06.772 "name": "BaseBdev1", 00:20:06.772 "uuid": "b4d120c0-e7ba-4b4c-b70c-6cc58c0a93dd", 00:20:06.772 "is_configured": true, 00:20:06.772 "data_offset": 256, 00:20:06.772 "data_size": 7936 00:20:06.772 }, 00:20:06.772 { 00:20:06.772 "name": "BaseBdev2", 00:20:06.772 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:06.772 "is_configured": false, 00:20:06.772 "data_offset": 0, 00:20:06.772 "data_size": 0 00:20:06.772 } 00:20:06.772 ] 00:20:06.772 }' 00:20:06.772 04:10:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:06.772 04:10:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:07.032 04:11:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:20:07.032 04:11:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:07.032 04:11:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:07.032 [2024-12-06 04:11:00.256160] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:20:07.032 [2024-12-06 04:11:00.256228] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:20:07.032 04:11:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:07.032 04:11:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:20:07.032 04:11:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 
-- # xtrace_disable 00:20:07.032 04:11:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:07.032 [2024-12-06 04:11:00.268195] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:20:07.032 [2024-12-06 04:11:00.269974] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:20:07.032 [2024-12-06 04:11:00.270015] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:20:07.032 04:11:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:07.032 04:11:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:20:07.032 04:11:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:20:07.032 04:11:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:20:07.032 04:11:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:20:07.032 04:11:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:20:07.032 04:11:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:20:07.032 04:11:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:20:07.032 04:11:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:20:07.032 04:11:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:07.032 04:11:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:07.032 
04:11:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:07.033 04:11:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:07.033 04:11:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:07.033 04:11:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:07.033 04:11:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:07.033 04:11:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:07.033 04:11:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:07.033 04:11:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:07.033 "name": "Existed_Raid", 00:20:07.033 "uuid": "b1b706c6-881c-4bb2-a353-9f0f6e3de955", 00:20:07.033 "strip_size_kb": 0, 00:20:07.033 "state": "configuring", 00:20:07.033 "raid_level": "raid1", 00:20:07.033 "superblock": true, 00:20:07.033 "num_base_bdevs": 2, 00:20:07.033 "num_base_bdevs_discovered": 1, 00:20:07.033 "num_base_bdevs_operational": 2, 00:20:07.033 "base_bdevs_list": [ 00:20:07.033 { 00:20:07.033 "name": "BaseBdev1", 00:20:07.033 "uuid": "b4d120c0-e7ba-4b4c-b70c-6cc58c0a93dd", 00:20:07.033 "is_configured": true, 00:20:07.033 "data_offset": 256, 00:20:07.033 "data_size": 7936 00:20:07.033 }, 00:20:07.033 { 00:20:07.033 "name": "BaseBdev2", 00:20:07.033 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:07.033 "is_configured": false, 00:20:07.033 "data_offset": 0, 00:20:07.033 "data_size": 0 00:20:07.033 } 00:20:07.033 ] 00:20:07.033 }' 00:20:07.033 04:11:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- 
# xtrace_disable 00:20:07.033 04:11:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:07.604 04:11:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b BaseBdev2 00:20:07.604 04:11:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:07.604 04:11:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:07.604 [2024-12-06 04:11:00.705306] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:20:07.604 [2024-12-06 04:11:00.705536] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:20:07.604 [2024-12-06 04:11:00.705550] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:20:07.604 [2024-12-06 04:11:00.705634] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:20:07.604 [2024-12-06 04:11:00.705709] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:20:07.604 [2024-12-06 04:11:00.705734] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:20:07.604 [2024-12-06 04:11:00.705796] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:07.604 BaseBdev2 00:20:07.604 04:11:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:07.604 04:11:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:20:07.604 04:11:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:20:07.604 04:11:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@904 -- # local bdev_timeout= 
00:20:07.604 04:11:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@905 -- # local i 00:20:07.604 04:11:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:20:07.604 04:11:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:20:07.604 04:11:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:20:07.604 04:11:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:07.604 04:11:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:07.604 04:11:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:07.604 04:11:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:20:07.604 04:11:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:07.604 04:11:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:07.604 [ 00:20:07.604 { 00:20:07.604 "name": "BaseBdev2", 00:20:07.604 "aliases": [ 00:20:07.604 "8d44db1e-7e36-4ef9-8292-02838e5f4aa8" 00:20:07.604 ], 00:20:07.604 "product_name": "Malloc disk", 00:20:07.604 "block_size": 4128, 00:20:07.604 "num_blocks": 8192, 00:20:07.604 "uuid": "8d44db1e-7e36-4ef9-8292-02838e5f4aa8", 00:20:07.604 "md_size": 32, 00:20:07.604 "md_interleave": true, 00:20:07.604 "dif_type": 0, 00:20:07.604 "assigned_rate_limits": { 00:20:07.604 "rw_ios_per_sec": 0, 00:20:07.604 "rw_mbytes_per_sec": 0, 00:20:07.604 "r_mbytes_per_sec": 0, 00:20:07.604 "w_mbytes_per_sec": 0 00:20:07.604 }, 00:20:07.604 "claimed": true, 00:20:07.604 "claim_type": "exclusive_write", 
00:20:07.604 "zoned": false, 00:20:07.604 "supported_io_types": { 00:20:07.604 "read": true, 00:20:07.604 "write": true, 00:20:07.604 "unmap": true, 00:20:07.604 "flush": true, 00:20:07.604 "reset": true, 00:20:07.604 "nvme_admin": false, 00:20:07.604 "nvme_io": false, 00:20:07.604 "nvme_io_md": false, 00:20:07.604 "write_zeroes": true, 00:20:07.604 "zcopy": true, 00:20:07.604 "get_zone_info": false, 00:20:07.604 "zone_management": false, 00:20:07.604 "zone_append": false, 00:20:07.604 "compare": false, 00:20:07.604 "compare_and_write": false, 00:20:07.604 "abort": true, 00:20:07.604 "seek_hole": false, 00:20:07.604 "seek_data": false, 00:20:07.604 "copy": true, 00:20:07.604 "nvme_iov_md": false 00:20:07.604 }, 00:20:07.604 "memory_domains": [ 00:20:07.604 { 00:20:07.604 "dma_device_id": "system", 00:20:07.604 "dma_device_type": 1 00:20:07.604 }, 00:20:07.604 { 00:20:07.604 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:07.604 "dma_device_type": 2 00:20:07.604 } 00:20:07.604 ], 00:20:07.604 "driver_specific": {} 00:20:07.604 } 00:20:07.604 ] 00:20:07.604 04:11:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:07.604 04:11:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@911 -- # return 0 00:20:07.604 04:11:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:20:07.604 04:11:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:20:07.604 04:11:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:20:07.604 04:11:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:20:07.604 04:11:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:07.604 
04:11:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:20:07.604 04:11:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:20:07.604 04:11:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:20:07.604 04:11:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:07.604 04:11:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:07.604 04:11:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:07.604 04:11:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:07.604 04:11:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:07.604 04:11:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:07.604 04:11:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:07.604 04:11:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:07.604 04:11:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:07.604 04:11:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:07.604 "name": "Existed_Raid", 00:20:07.604 "uuid": "b1b706c6-881c-4bb2-a353-9f0f6e3de955", 00:20:07.604 "strip_size_kb": 0, 00:20:07.604 "state": "online", 00:20:07.604 "raid_level": "raid1", 00:20:07.604 "superblock": true, 00:20:07.604 "num_base_bdevs": 2, 00:20:07.604 "num_base_bdevs_discovered": 2, 00:20:07.604 
"num_base_bdevs_operational": 2, 00:20:07.604 "base_bdevs_list": [ 00:20:07.604 { 00:20:07.604 "name": "BaseBdev1", 00:20:07.604 "uuid": "b4d120c0-e7ba-4b4c-b70c-6cc58c0a93dd", 00:20:07.604 "is_configured": true, 00:20:07.604 "data_offset": 256, 00:20:07.604 "data_size": 7936 00:20:07.604 }, 00:20:07.604 { 00:20:07.604 "name": "BaseBdev2", 00:20:07.604 "uuid": "8d44db1e-7e36-4ef9-8292-02838e5f4aa8", 00:20:07.604 "is_configured": true, 00:20:07.604 "data_offset": 256, 00:20:07.604 "data_size": 7936 00:20:07.604 } 00:20:07.604 ] 00:20:07.604 }' 00:20:07.604 04:11:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:07.604 04:11:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:07.864 04:11:01 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:20:07.864 04:11:01 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:20:07.864 04:11:01 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:20:07.864 04:11:01 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:20:07.864 04:11:01 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@184 -- # local name 00:20:07.864 04:11:01 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:20:07.864 04:11:01 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:20:07.864 04:11:01 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:07.864 04:11:01 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:07.864 04:11:01 
bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:20:07.864 [2024-12-06 04:11:01.144958] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:20:07.864 04:11:01 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:07.864 04:11:01 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:20:07.864 "name": "Existed_Raid", 00:20:07.864 "aliases": [ 00:20:07.864 "b1b706c6-881c-4bb2-a353-9f0f6e3de955" 00:20:07.864 ], 00:20:07.864 "product_name": "Raid Volume", 00:20:07.864 "block_size": 4128, 00:20:07.864 "num_blocks": 7936, 00:20:07.864 "uuid": "b1b706c6-881c-4bb2-a353-9f0f6e3de955", 00:20:07.864 "md_size": 32, 00:20:07.864 "md_interleave": true, 00:20:07.864 "dif_type": 0, 00:20:07.864 "assigned_rate_limits": { 00:20:07.864 "rw_ios_per_sec": 0, 00:20:07.864 "rw_mbytes_per_sec": 0, 00:20:07.864 "r_mbytes_per_sec": 0, 00:20:07.864 "w_mbytes_per_sec": 0 00:20:07.864 }, 00:20:07.864 "claimed": false, 00:20:07.864 "zoned": false, 00:20:07.864 "supported_io_types": { 00:20:07.864 "read": true, 00:20:07.864 "write": true, 00:20:07.864 "unmap": false, 00:20:07.864 "flush": false, 00:20:07.864 "reset": true, 00:20:07.864 "nvme_admin": false, 00:20:07.864 "nvme_io": false, 00:20:07.864 "nvme_io_md": false, 00:20:07.864 "write_zeroes": true, 00:20:07.864 "zcopy": false, 00:20:07.864 "get_zone_info": false, 00:20:07.864 "zone_management": false, 00:20:07.864 "zone_append": false, 00:20:07.865 "compare": false, 00:20:07.865 "compare_and_write": false, 00:20:07.865 "abort": false, 00:20:07.865 "seek_hole": false, 00:20:07.865 "seek_data": false, 00:20:07.865 "copy": false, 00:20:07.865 "nvme_iov_md": false 00:20:07.865 }, 00:20:07.865 "memory_domains": [ 00:20:07.865 { 00:20:07.865 "dma_device_id": "system", 00:20:07.865 "dma_device_type": 1 00:20:07.865 }, 00:20:07.865 { 00:20:07.865 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:20:07.865 "dma_device_type": 2 00:20:07.865 }, 00:20:07.865 { 00:20:07.865 "dma_device_id": "system", 00:20:07.865 "dma_device_type": 1 00:20:07.865 }, 00:20:07.865 { 00:20:07.865 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:07.865 "dma_device_type": 2 00:20:07.865 } 00:20:07.865 ], 00:20:07.865 "driver_specific": { 00:20:07.865 "raid": { 00:20:07.865 "uuid": "b1b706c6-881c-4bb2-a353-9f0f6e3de955", 00:20:07.865 "strip_size_kb": 0, 00:20:07.865 "state": "online", 00:20:07.865 "raid_level": "raid1", 00:20:07.865 "superblock": true, 00:20:07.865 "num_base_bdevs": 2, 00:20:07.865 "num_base_bdevs_discovered": 2, 00:20:07.865 "num_base_bdevs_operational": 2, 00:20:07.865 "base_bdevs_list": [ 00:20:07.865 { 00:20:07.865 "name": "BaseBdev1", 00:20:07.865 "uuid": "b4d120c0-e7ba-4b4c-b70c-6cc58c0a93dd", 00:20:07.865 "is_configured": true, 00:20:07.865 "data_offset": 256, 00:20:07.865 "data_size": 7936 00:20:07.865 }, 00:20:07.865 { 00:20:07.865 "name": "BaseBdev2", 00:20:07.865 "uuid": "8d44db1e-7e36-4ef9-8292-02838e5f4aa8", 00:20:07.865 "is_configured": true, 00:20:07.865 "data_offset": 256, 00:20:07.865 "data_size": 7936 00:20:07.865 } 00:20:07.865 ] 00:20:07.865 } 00:20:07.865 } 00:20:07.865 }' 00:20:07.865 04:11:01 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:20:07.865 04:11:01 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:20:07.865 BaseBdev2' 00:20:08.125 04:11:01 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:20:08.125 04:11:01 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4128 32 true 0' 00:20:08.125 04:11:01 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@191 -- 
# for name in $base_bdev_names 00:20:08.125 04:11:01 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:20:08.125 04:11:01 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:20:08.125 04:11:01 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:08.125 04:11:01 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:08.125 04:11:01 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:08.125 04:11:01 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4128 32 true 0' 00:20:08.125 04:11:01 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]] 00:20:08.125 04:11:01 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:20:08.125 04:11:01 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:20:08.125 04:11:01 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:20:08.125 04:11:01 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:08.125 04:11:01 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:08.125 04:11:01 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:08.125 04:11:01 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4128 32 true 0' 00:20:08.125 
04:11:01 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]] 00:20:08.125 04:11:01 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:20:08.125 04:11:01 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:08.125 04:11:01 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:08.125 [2024-12-06 04:11:01.324339] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:20:08.125 04:11:01 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:08.125 04:11:01 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@260 -- # local expected_state 00:20:08.125 04:11:01 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:20:08.125 04:11:01 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@198 -- # case $1 in 00:20:08.125 04:11:01 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@199 -- # return 0 00:20:08.125 04:11:01 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:20:08.125 04:11:01 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:20:08.125 04:11:01 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:20:08.125 04:11:01 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:08.125 04:11:01 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:20:08.126 04:11:01 
bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:20:08.126 04:11:01 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:20:08.126 04:11:01 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:08.126 04:11:01 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:08.126 04:11:01 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:08.126 04:11:01 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:08.126 04:11:01 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:08.126 04:11:01 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:08.126 04:11:01 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:08.126 04:11:01 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:08.126 04:11:01 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:08.126 04:11:01 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:08.126 "name": "Existed_Raid", 00:20:08.126 "uuid": "b1b706c6-881c-4bb2-a353-9f0f6e3de955", 00:20:08.126 "strip_size_kb": 0, 00:20:08.126 "state": "online", 00:20:08.126 "raid_level": "raid1", 00:20:08.126 "superblock": true, 00:20:08.126 "num_base_bdevs": 2, 00:20:08.126 "num_base_bdevs_discovered": 1, 00:20:08.126 "num_base_bdevs_operational": 1, 00:20:08.126 "base_bdevs_list": [ 00:20:08.126 { 00:20:08.126 "name": null, 00:20:08.126 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:20:08.126 "is_configured": false, 00:20:08.126 "data_offset": 0, 00:20:08.126 "data_size": 7936 00:20:08.126 }, 00:20:08.126 { 00:20:08.126 "name": "BaseBdev2", 00:20:08.126 "uuid": "8d44db1e-7e36-4ef9-8292-02838e5f4aa8", 00:20:08.126 "is_configured": true, 00:20:08.126 "data_offset": 256, 00:20:08.126 "data_size": 7936 00:20:08.126 } 00:20:08.126 ] 00:20:08.126 }' 00:20:08.126 04:11:01 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:08.126 04:11:01 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:08.697 04:11:01 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:20:08.697 04:11:01 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:20:08.697 04:11:01 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:08.697 04:11:01 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:20:08.697 04:11:01 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:08.697 04:11:01 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:08.697 04:11:01 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:08.697 04:11:01 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:20:08.697 04:11:01 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:20:08.697 04:11:01 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:20:08.697 04:11:01 
bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:08.697 04:11:01 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:08.697 [2024-12-06 04:11:01.940220] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:20:08.697 [2024-12-06 04:11:01.940325] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:20:08.697 [2024-12-06 04:11:02.032815] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:20:08.697 [2024-12-06 04:11:02.032887] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:20:08.697 [2024-12-06 04:11:02.032900] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:20:08.697 04:11:02 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:08.697 04:11:02 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:20:08.697 04:11:02 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:20:08.697 04:11:02 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:08.697 04:11:02 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:20:08.697 04:11:02 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:08.697 04:11:02 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:08.962 04:11:02 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:08.962 04:11:02 bdev_raid.raid_state_function_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@278 -- # raid_bdev= 00:20:08.962 04:11:02 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:20:08.962 04:11:02 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:20:08.962 04:11:02 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@326 -- # killprocess 88727 00:20:08.962 04:11:02 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@954 -- # '[' -z 88727 ']' 00:20:08.962 04:11:02 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@958 -- # kill -0 88727 00:20:08.962 04:11:02 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@959 -- # uname 00:20:08.963 04:11:02 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:08.963 04:11:02 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 88727 00:20:08.963 04:11:02 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:20:08.963 04:11:02 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:20:08.963 killing process with pid 88727 00:20:08.963 04:11:02 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@972 -- # echo 'killing process with pid 88727' 00:20:08.963 04:11:02 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@973 -- # kill 88727 00:20:08.963 [2024-12-06 04:11:02.115390] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:20:08.963 04:11:02 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@978 -- # wait 88727 00:20:08.963 [2024-12-06 04:11:02.132271] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:20:09.903 
04:11:03 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@328 -- # return 0 00:20:09.903 00:20:09.903 real 0m4.914s 00:20:09.903 user 0m7.114s 00:20:09.903 sys 0m0.819s 00:20:09.903 04:11:03 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:09.903 04:11:03 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:09.903 ************************************ 00:20:09.903 END TEST raid_state_function_test_sb_md_interleaved 00:20:09.903 ************************************ 00:20:10.163 04:11:03 bdev_raid -- bdev/bdev_raid.sh@1012 -- # run_test raid_superblock_test_md_interleaved raid_superblock_test raid1 2 00:20:10.163 04:11:03 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:20:10.163 04:11:03 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:10.163 04:11:03 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:20:10.163 ************************************ 00:20:10.163 START TEST raid_superblock_test_md_interleaved 00:20:10.163 ************************************ 00:20:10.163 04:11:03 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@1129 -- # raid_superblock_test raid1 2 00:20:10.163 04:11:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1 00:20:10.163 04:11:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2 00:20:10.163 04:11:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:20:10.163 04:11:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:20:10.163 04:11:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:20:10.163 04:11:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@396 -- # local 
base_bdevs_pt 00:20:10.163 04:11:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:20:10.163 04:11:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:20:10.163 04:11:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:20:10.163 04:11:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@399 -- # local strip_size 00:20:10.163 04:11:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:20:10.164 04:11:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:20:10.164 04:11:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:20:10.164 04:11:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@404 -- # '[' raid1 '!=' raid1 ']' 00:20:10.164 04:11:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:20:10.164 04:11:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@412 -- # raid_pid=88974 00:20:10.164 04:11:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@413 -- # waitforlisten 88974 00:20:10.164 04:11:03 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@835 -- # '[' -z 88974 ']' 00:20:10.164 04:11:03 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:10.164 04:11:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:20:10.164 04:11:03 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:10.164 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:20:10.164 04:11:03 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:10.164 04:11:03 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:10.164 04:11:03 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:10.164 [2024-12-06 04:11:03.380696] Starting SPDK v25.01-pre git sha1 a4d2a837b / DPDK 24.03.0 initialization... 00:20:10.164 [2024-12-06 04:11:03.380810] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88974 ] 00:20:10.424 [2024-12-06 04:11:03.530502] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:10.424 [2024-12-06 04:11:03.640570] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:10.684 [2024-12-06 04:11:03.841432] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:20:10.684 [2024-12-06 04:11:03.841470] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:20:10.944 04:11:04 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:10.944 04:11:04 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@868 -- # return 0 00:20:10.944 04:11:04 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:20:10.944 04:11:04 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:20:10.944 04:11:04 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:20:10.944 04:11:04 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@418 -- # local 
bdev_pt=pt1 00:20:10.944 04:11:04 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:20:10.944 04:11:04 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:20:10.944 04:11:04 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:20:10.944 04:11:04 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:20:10.944 04:11:04 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b malloc1 00:20:10.944 04:11:04 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:10.944 04:11:04 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:10.944 malloc1 00:20:10.944 04:11:04 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:10.944 04:11:04 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:20:10.944 04:11:04 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:10.944 04:11:04 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:10.944 [2024-12-06 04:11:04.227881] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:20:10.944 [2024-12-06 04:11:04.227948] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:10.944 [2024-12-06 04:11:04.227970] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:20:10.944 [2024-12-06 04:11:04.227979] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:10.944 
[2024-12-06 04:11:04.229764] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:10.944 [2024-12-06 04:11:04.229798] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:20:10.944 pt1 00:20:10.944 04:11:04 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:10.944 04:11:04 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:20:10.944 04:11:04 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:20:10.944 04:11:04 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:20:10.944 04:11:04 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:20:10.944 04:11:04 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:20:10.944 04:11:04 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:20:10.944 04:11:04 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:20:10.944 04:11:04 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:20:10.944 04:11:04 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b malloc2 00:20:10.944 04:11:04 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:10.944 04:11:04 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:10.945 malloc2 00:20:10.945 04:11:04 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:10.945 04:11:04 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@426 
-- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:20:10.945 04:11:04 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:10.945 04:11:04 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:10.945 [2024-12-06 04:11:04.282122] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:20:10.945 [2024-12-06 04:11:04.282170] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:10.945 [2024-12-06 04:11:04.282191] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:20:10.945 [2024-12-06 04:11:04.282199] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:10.945 [2024-12-06 04:11:04.283924] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:10.945 [2024-12-06 04:11:04.283958] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:20:10.945 pt2 00:20:10.945 04:11:04 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:10.945 04:11:04 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:20:10.945 04:11:04 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:20:10.945 04:11:04 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2'\''' -n raid_bdev1 -s 00:20:10.945 04:11:04 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:10.945 04:11:04 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:10.945 [2024-12-06 04:11:04.294132] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:20:10.945 [2024-12-06 04:11:04.295835] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:20:10.945 [2024-12-06 04:11:04.296035] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:20:10.945 [2024-12-06 04:11:04.296049] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:20:10.945 [2024-12-06 04:11:04.296134] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:20:10.945 [2024-12-06 04:11:04.296204] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:20:10.945 [2024-12-06 04:11:04.296221] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:20:10.945 [2024-12-06 04:11:04.296286] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:11.205 04:11:04 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:11.205 04:11:04 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:20:11.205 04:11:04 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:11.205 04:11:04 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:11.205 04:11:04 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:20:11.205 04:11:04 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:20:11.205 04:11:04 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:20:11.205 04:11:04 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:11.205 04:11:04 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:11.205 
04:11:04 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:11.205 04:11:04 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:11.205 04:11:04 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:11.205 04:11:04 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:11.205 04:11:04 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:11.205 04:11:04 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:11.205 04:11:04 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:11.205 04:11:04 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:11.205 "name": "raid_bdev1", 00:20:11.205 "uuid": "a558a4b5-c1e7-44ca-bf42-9f1af44fdd10", 00:20:11.205 "strip_size_kb": 0, 00:20:11.205 "state": "online", 00:20:11.205 "raid_level": "raid1", 00:20:11.205 "superblock": true, 00:20:11.205 "num_base_bdevs": 2, 00:20:11.205 "num_base_bdevs_discovered": 2, 00:20:11.205 "num_base_bdevs_operational": 2, 00:20:11.205 "base_bdevs_list": [ 00:20:11.205 { 00:20:11.205 "name": "pt1", 00:20:11.205 "uuid": "00000000-0000-0000-0000-000000000001", 00:20:11.205 "is_configured": true, 00:20:11.205 "data_offset": 256, 00:20:11.205 "data_size": 7936 00:20:11.205 }, 00:20:11.205 { 00:20:11.205 "name": "pt2", 00:20:11.205 "uuid": "00000000-0000-0000-0000-000000000002", 00:20:11.205 "is_configured": true, 00:20:11.205 "data_offset": 256, 00:20:11.205 "data_size": 7936 00:20:11.205 } 00:20:11.205 ] 00:20:11.205 }' 00:20:11.205 04:11:04 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:11.205 04:11:04 
bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:11.466 04:11:04 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:20:11.466 04:11:04 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:20:11.466 04:11:04 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:20:11.466 04:11:04 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:20:11.466 04:11:04 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@184 -- # local name 00:20:11.466 04:11:04 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:20:11.466 04:11:04 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:20:11.466 04:11:04 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:11.466 04:11:04 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:11.466 04:11:04 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:20:11.466 [2024-12-06 04:11:04.745626] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:20:11.466 04:11:04 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:11.466 04:11:04 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:20:11.466 "name": "raid_bdev1", 00:20:11.466 "aliases": [ 00:20:11.466 "a558a4b5-c1e7-44ca-bf42-9f1af44fdd10" 00:20:11.466 ], 00:20:11.466 "product_name": "Raid Volume", 00:20:11.466 "block_size": 4128, 00:20:11.466 "num_blocks": 7936, 00:20:11.466 "uuid": "a558a4b5-c1e7-44ca-bf42-9f1af44fdd10", 00:20:11.466 "md_size": 32, 
00:20:11.466 "md_interleave": true, 00:20:11.466 "dif_type": 0, 00:20:11.466 "assigned_rate_limits": { 00:20:11.466 "rw_ios_per_sec": 0, 00:20:11.466 "rw_mbytes_per_sec": 0, 00:20:11.466 "r_mbytes_per_sec": 0, 00:20:11.466 "w_mbytes_per_sec": 0 00:20:11.466 }, 00:20:11.466 "claimed": false, 00:20:11.466 "zoned": false, 00:20:11.466 "supported_io_types": { 00:20:11.466 "read": true, 00:20:11.466 "write": true, 00:20:11.466 "unmap": false, 00:20:11.466 "flush": false, 00:20:11.466 "reset": true, 00:20:11.466 "nvme_admin": false, 00:20:11.466 "nvme_io": false, 00:20:11.466 "nvme_io_md": false, 00:20:11.466 "write_zeroes": true, 00:20:11.466 "zcopy": false, 00:20:11.466 "get_zone_info": false, 00:20:11.466 "zone_management": false, 00:20:11.466 "zone_append": false, 00:20:11.466 "compare": false, 00:20:11.466 "compare_and_write": false, 00:20:11.466 "abort": false, 00:20:11.466 "seek_hole": false, 00:20:11.466 "seek_data": false, 00:20:11.466 "copy": false, 00:20:11.466 "nvme_iov_md": false 00:20:11.466 }, 00:20:11.466 "memory_domains": [ 00:20:11.466 { 00:20:11.466 "dma_device_id": "system", 00:20:11.466 "dma_device_type": 1 00:20:11.466 }, 00:20:11.466 { 00:20:11.466 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:11.466 "dma_device_type": 2 00:20:11.466 }, 00:20:11.466 { 00:20:11.466 "dma_device_id": "system", 00:20:11.466 "dma_device_type": 1 00:20:11.466 }, 00:20:11.466 { 00:20:11.466 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:11.466 "dma_device_type": 2 00:20:11.466 } 00:20:11.466 ], 00:20:11.466 "driver_specific": { 00:20:11.466 "raid": { 00:20:11.466 "uuid": "a558a4b5-c1e7-44ca-bf42-9f1af44fdd10", 00:20:11.466 "strip_size_kb": 0, 00:20:11.466 "state": "online", 00:20:11.466 "raid_level": "raid1", 00:20:11.466 "superblock": true, 00:20:11.466 "num_base_bdevs": 2, 00:20:11.466 "num_base_bdevs_discovered": 2, 00:20:11.466 "num_base_bdevs_operational": 2, 00:20:11.466 "base_bdevs_list": [ 00:20:11.466 { 00:20:11.466 "name": "pt1", 00:20:11.466 "uuid": 
"00000000-0000-0000-0000-000000000001", 00:20:11.466 "is_configured": true, 00:20:11.466 "data_offset": 256, 00:20:11.466 "data_size": 7936 00:20:11.466 }, 00:20:11.466 { 00:20:11.466 "name": "pt2", 00:20:11.466 "uuid": "00000000-0000-0000-0000-000000000002", 00:20:11.466 "is_configured": true, 00:20:11.466 "data_offset": 256, 00:20:11.466 "data_size": 7936 00:20:11.466 } 00:20:11.466 ] 00:20:11.466 } 00:20:11.466 } 00:20:11.466 }' 00:20:11.466 04:11:04 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:20:11.726 04:11:04 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:20:11.726 pt2' 00:20:11.726 04:11:04 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:20:11.726 04:11:04 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4128 32 true 0' 00:20:11.726 04:11:04 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:20:11.726 04:11:04 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:20:11.726 04:11:04 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:20:11.726 04:11:04 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:11.726 04:11:04 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:11.726 04:11:04 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:11.726 04:11:04 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4128 32 true 0' 00:20:11.726 04:11:04 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]] 00:20:11.726 04:11:04 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:20:11.726 04:11:04 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:20:11.726 04:11:04 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:20:11.726 04:11:04 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:11.726 04:11:04 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:11.726 04:11:04 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:11.726 04:11:04 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4128 32 true 0' 00:20:11.726 04:11:04 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]] 00:20:11.726 04:11:04 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:20:11.726 04:11:04 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:20:11.726 04:11:04 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:11.727 04:11:04 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:11.727 [2024-12-06 04:11:04.973213] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:20:11.727 04:11:04 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:11.727 04:11:05 bdev_raid.raid_superblock_test_md_interleaved -- 
bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=a558a4b5-c1e7-44ca-bf42-9f1af44fdd10 00:20:11.727 04:11:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@436 -- # '[' -z a558a4b5-c1e7-44ca-bf42-9f1af44fdd10 ']' 00:20:11.727 04:11:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:20:11.727 04:11:05 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:11.727 04:11:05 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:11.727 [2024-12-06 04:11:05.020816] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:20:11.727 [2024-12-06 04:11:05.020843] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:20:11.727 [2024-12-06 04:11:05.020920] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:20:11.727 [2024-12-06 04:11:05.020995] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:20:11.727 [2024-12-06 04:11:05.021012] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:20:11.727 04:11:05 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:11.727 04:11:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:11.727 04:11:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:20:11.727 04:11:05 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:11.727 04:11:05 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:11.727 04:11:05 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:11.727 04:11:05 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:20:11.727 04:11:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:20:11.727 04:11:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:20:11.727 04:11:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:20:11.727 04:11:05 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:11.727 04:11:05 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:11.986 04:11:05 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:11.986 04:11:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:20:11.986 04:11:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:20:11.986 04:11:05 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:11.986 04:11:05 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:11.986 04:11:05 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:11.986 04:11:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:20:11.986 04:11:05 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:11.986 04:11:05 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:11.986 04:11:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:20:11.986 04:11:05 
bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:11.986 04:11:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:20:11.986 04:11:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:20:11.986 04:11:05 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@652 -- # local es=0 00:20:11.986 04:11:05 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:20:11.986 04:11:05 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:20:11.986 04:11:05 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:11.986 04:11:05 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:20:11.986 04:11:05 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:11.986 04:11:05 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:20:11.986 04:11:05 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:11.986 04:11:05 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:11.986 [2024-12-06 04:11:05.160669] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:20:11.987 [2024-12-06 04:11:05.162613] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:20:11.987 [2024-12-06 04:11:05.162702] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: 
Superblock of a different raid bdev found on bdev malloc1 00:20:11.987 [2024-12-06 04:11:05.162759] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:20:11.987 [2024-12-06 04:11:05.162774] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:20:11.987 [2024-12-06 04:11:05.162784] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:20:11.987 request: 00:20:11.987 { 00:20:11.987 "name": "raid_bdev1", 00:20:11.987 "raid_level": "raid1", 00:20:11.987 "base_bdevs": [ 00:20:11.987 "malloc1", 00:20:11.987 "malloc2" 00:20:11.987 ], 00:20:11.987 "superblock": false, 00:20:11.987 "method": "bdev_raid_create", 00:20:11.987 "req_id": 1 00:20:11.987 } 00:20:11.987 Got JSON-RPC error response 00:20:11.987 response: 00:20:11.987 { 00:20:11.987 "code": -17, 00:20:11.987 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:20:11.987 } 00:20:11.987 04:11:05 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:20:11.987 04:11:05 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@655 -- # es=1 00:20:11.987 04:11:05 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:20:11.987 04:11:05 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:20:11.987 04:11:05 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:20:11.987 04:11:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:11.987 04:11:05 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:11.987 04:11:05 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:11.987 04:11:05 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:20:11.987 04:11:05 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:11.987 04:11:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:20:11.987 04:11:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:20:11.987 04:11:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:20:11.987 04:11:05 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:11.987 04:11:05 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:11.987 [2024-12-06 04:11:05.224540] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:20:11.987 [2024-12-06 04:11:05.224607] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:11.987 [2024-12-06 04:11:05.224627] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:20:11.987 [2024-12-06 04:11:05.224644] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:11.987 [2024-12-06 04:11:05.226743] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:11.987 [2024-12-06 04:11:05.226779] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:20:11.987 [2024-12-06 04:11:05.226840] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:20:11.987 [2024-12-06 04:11:05.226909] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:20:11.987 pt1 00:20:11.987 04:11:05 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:11.987 04:11:05 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:20:11.987 04:11:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:11.987 04:11:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:20:11.987 04:11:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:20:11.987 04:11:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:20:11.987 04:11:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:20:11.987 04:11:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:11.987 04:11:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:11.987 04:11:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:11.987 04:11:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:11.987 04:11:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:11.987 04:11:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:11.987 04:11:05 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:11.987 04:11:05 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:11.987 04:11:05 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:11.987 04:11:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:11.987 
"name": "raid_bdev1", 00:20:11.987 "uuid": "a558a4b5-c1e7-44ca-bf42-9f1af44fdd10", 00:20:11.987 "strip_size_kb": 0, 00:20:11.987 "state": "configuring", 00:20:11.987 "raid_level": "raid1", 00:20:11.987 "superblock": true, 00:20:11.987 "num_base_bdevs": 2, 00:20:11.987 "num_base_bdevs_discovered": 1, 00:20:11.987 "num_base_bdevs_operational": 2, 00:20:11.987 "base_bdevs_list": [ 00:20:11.987 { 00:20:11.987 "name": "pt1", 00:20:11.987 "uuid": "00000000-0000-0000-0000-000000000001", 00:20:11.987 "is_configured": true, 00:20:11.987 "data_offset": 256, 00:20:11.987 "data_size": 7936 00:20:11.987 }, 00:20:11.987 { 00:20:11.987 "name": null, 00:20:11.987 "uuid": "00000000-0000-0000-0000-000000000002", 00:20:11.987 "is_configured": false, 00:20:11.987 "data_offset": 256, 00:20:11.987 "data_size": 7936 00:20:11.987 } 00:20:11.987 ] 00:20:11.987 }' 00:20:11.987 04:11:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:11.987 04:11:05 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:12.556 04:11:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']' 00:20:12.556 04:11:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:20:12.556 04:11:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:20:12.556 04:11:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:20:12.556 04:11:05 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:12.556 04:11:05 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:12.556 [2024-12-06 04:11:05.675739] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:20:12.556 [2024-12-06 04:11:05.675820] 
vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:12.556 [2024-12-06 04:11:05.675843] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:20:12.556 [2024-12-06 04:11:05.675855] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:12.556 [2024-12-06 04:11:05.676042] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:12.556 [2024-12-06 04:11:05.676076] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:20:12.556 [2024-12-06 04:11:05.676133] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:20:12.556 [2024-12-06 04:11:05.676159] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:20:12.556 [2024-12-06 04:11:05.676252] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:20:12.556 [2024-12-06 04:11:05.676264] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:20:12.556 [2024-12-06 04:11:05.676354] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:20:12.556 [2024-12-06 04:11:05.676423] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:20:12.556 [2024-12-06 04:11:05.676432] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:20:12.556 [2024-12-06 04:11:05.676500] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:12.556 pt2 00:20:12.556 04:11:05 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:12.556 04:11:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:20:12.556 04:11:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:20:12.556 04:11:05 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:20:12.556 04:11:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:12.556 04:11:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:12.556 04:11:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:20:12.556 04:11:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:20:12.556 04:11:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:20:12.556 04:11:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:12.556 04:11:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:12.556 04:11:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:12.556 04:11:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:12.556 04:11:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:12.556 04:11:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:12.556 04:11:05 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:12.556 04:11:05 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:12.556 04:11:05 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:12.556 04:11:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:12.556 "name": 
"raid_bdev1", 00:20:12.556 "uuid": "a558a4b5-c1e7-44ca-bf42-9f1af44fdd10", 00:20:12.556 "strip_size_kb": 0, 00:20:12.556 "state": "online", 00:20:12.556 "raid_level": "raid1", 00:20:12.556 "superblock": true, 00:20:12.556 "num_base_bdevs": 2, 00:20:12.556 "num_base_bdevs_discovered": 2, 00:20:12.556 "num_base_bdevs_operational": 2, 00:20:12.556 "base_bdevs_list": [ 00:20:12.556 { 00:20:12.556 "name": "pt1", 00:20:12.556 "uuid": "00000000-0000-0000-0000-000000000001", 00:20:12.556 "is_configured": true, 00:20:12.556 "data_offset": 256, 00:20:12.556 "data_size": 7936 00:20:12.556 }, 00:20:12.556 { 00:20:12.556 "name": "pt2", 00:20:12.556 "uuid": "00000000-0000-0000-0000-000000000002", 00:20:12.556 "is_configured": true, 00:20:12.556 "data_offset": 256, 00:20:12.556 "data_size": 7936 00:20:12.556 } 00:20:12.556 ] 00:20:12.556 }' 00:20:12.556 04:11:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:12.556 04:11:05 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:12.815 04:11:06 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:20:12.815 04:11:06 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:20:12.815 04:11:06 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:20:12.815 04:11:06 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:20:12.815 04:11:06 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@184 -- # local name 00:20:12.815 04:11:06 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:20:12.815 04:11:06 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:20:12.815 04:11:06 
bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:12.815 04:11:06 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:12.815 04:11:06 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:20:12.815 [2024-12-06 04:11:06.143226] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:20:12.815 04:11:06 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:13.075 04:11:06 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:20:13.075 "name": "raid_bdev1", 00:20:13.075 "aliases": [ 00:20:13.075 "a558a4b5-c1e7-44ca-bf42-9f1af44fdd10" 00:20:13.075 ], 00:20:13.075 "product_name": "Raid Volume", 00:20:13.075 "block_size": 4128, 00:20:13.075 "num_blocks": 7936, 00:20:13.075 "uuid": "a558a4b5-c1e7-44ca-bf42-9f1af44fdd10", 00:20:13.075 "md_size": 32, 00:20:13.075 "md_interleave": true, 00:20:13.075 "dif_type": 0, 00:20:13.075 "assigned_rate_limits": { 00:20:13.075 "rw_ios_per_sec": 0, 00:20:13.075 "rw_mbytes_per_sec": 0, 00:20:13.075 "r_mbytes_per_sec": 0, 00:20:13.075 "w_mbytes_per_sec": 0 00:20:13.075 }, 00:20:13.075 "claimed": false, 00:20:13.075 "zoned": false, 00:20:13.075 "supported_io_types": { 00:20:13.075 "read": true, 00:20:13.075 "write": true, 00:20:13.075 "unmap": false, 00:20:13.075 "flush": false, 00:20:13.075 "reset": true, 00:20:13.075 "nvme_admin": false, 00:20:13.075 "nvme_io": false, 00:20:13.075 "nvme_io_md": false, 00:20:13.075 "write_zeroes": true, 00:20:13.075 "zcopy": false, 00:20:13.075 "get_zone_info": false, 00:20:13.075 "zone_management": false, 00:20:13.075 "zone_append": false, 00:20:13.075 "compare": false, 00:20:13.075 "compare_and_write": false, 00:20:13.075 "abort": false, 00:20:13.075 "seek_hole": false, 00:20:13.075 "seek_data": false, 00:20:13.075 "copy": false, 00:20:13.075 "nvme_iov_md": 
false 00:20:13.075 }, 00:20:13.075 "memory_domains": [ 00:20:13.075 { 00:20:13.075 "dma_device_id": "system", 00:20:13.075 "dma_device_type": 1 00:20:13.075 }, 00:20:13.075 { 00:20:13.075 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:13.075 "dma_device_type": 2 00:20:13.075 }, 00:20:13.075 { 00:20:13.075 "dma_device_id": "system", 00:20:13.075 "dma_device_type": 1 00:20:13.075 }, 00:20:13.075 { 00:20:13.075 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:13.075 "dma_device_type": 2 00:20:13.075 } 00:20:13.075 ], 00:20:13.075 "driver_specific": { 00:20:13.075 "raid": { 00:20:13.075 "uuid": "a558a4b5-c1e7-44ca-bf42-9f1af44fdd10", 00:20:13.075 "strip_size_kb": 0, 00:20:13.075 "state": "online", 00:20:13.075 "raid_level": "raid1", 00:20:13.075 "superblock": true, 00:20:13.075 "num_base_bdevs": 2, 00:20:13.075 "num_base_bdevs_discovered": 2, 00:20:13.076 "num_base_bdevs_operational": 2, 00:20:13.076 "base_bdevs_list": [ 00:20:13.076 { 00:20:13.076 "name": "pt1", 00:20:13.076 "uuid": "00000000-0000-0000-0000-000000000001", 00:20:13.076 "is_configured": true, 00:20:13.076 "data_offset": 256, 00:20:13.076 "data_size": 7936 00:20:13.076 }, 00:20:13.076 { 00:20:13.076 "name": "pt2", 00:20:13.076 "uuid": "00000000-0000-0000-0000-000000000002", 00:20:13.076 "is_configured": true, 00:20:13.076 "data_offset": 256, 00:20:13.076 "data_size": 7936 00:20:13.076 } 00:20:13.076 ] 00:20:13.076 } 00:20:13.076 } 00:20:13.076 }' 00:20:13.076 04:11:06 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:20:13.076 04:11:06 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:20:13.076 pt2' 00:20:13.076 04:11:06 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:20:13.076 04:11:06 bdev_raid.raid_superblock_test_md_interleaved -- 
bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4128 32 true 0' 00:20:13.076 04:11:06 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:20:13.076 04:11:06 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:20:13.076 04:11:06 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:20:13.076 04:11:06 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:13.076 04:11:06 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:13.076 04:11:06 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:13.076 04:11:06 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4128 32 true 0' 00:20:13.076 04:11:06 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]] 00:20:13.076 04:11:06 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:20:13.076 04:11:06 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:20:13.076 04:11:06 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:20:13.076 04:11:06 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:13.076 04:11:06 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:13.076 04:11:06 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:13.076 04:11:06 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # 
cmp_base_bdev='4128 32 true 0' 00:20:13.076 04:11:06 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]] 00:20:13.076 04:11:06 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:20:13.076 04:11:06 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:20:13.076 04:11:06 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:13.076 04:11:06 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:13.076 [2024-12-06 04:11:06.354806] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:20:13.076 04:11:06 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:13.076 04:11:06 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@487 -- # '[' a558a4b5-c1e7-44ca-bf42-9f1af44fdd10 '!=' a558a4b5-c1e7-44ca-bf42-9f1af44fdd10 ']' 00:20:13.076 04:11:06 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1 00:20:13.076 04:11:06 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@198 -- # case $1 in 00:20:13.076 04:11:06 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@199 -- # return 0 00:20:13.076 04:11:06 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:20:13.076 04:11:06 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:13.076 04:11:06 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:13.076 [2024-12-06 04:11:06.390511] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:20:13.076 04:11:06 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # 
[[ 0 == 0 ]] 00:20:13.076 04:11:06 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:20:13.076 04:11:06 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:13.076 04:11:06 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:13.076 04:11:06 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:20:13.076 04:11:06 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:20:13.076 04:11:06 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:20:13.076 04:11:06 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:13.076 04:11:06 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:13.076 04:11:06 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:13.076 04:11:06 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:13.076 04:11:06 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:13.076 04:11:06 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:13.076 04:11:06 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:13.076 04:11:06 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:13.076 04:11:06 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:13.336 04:11:06 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # 
raid_bdev_info='{ 00:20:13.336 "name": "raid_bdev1", 00:20:13.336 "uuid": "a558a4b5-c1e7-44ca-bf42-9f1af44fdd10", 00:20:13.336 "strip_size_kb": 0, 00:20:13.336 "state": "online", 00:20:13.336 "raid_level": "raid1", 00:20:13.336 "superblock": true, 00:20:13.336 "num_base_bdevs": 2, 00:20:13.336 "num_base_bdevs_discovered": 1, 00:20:13.336 "num_base_bdevs_operational": 1, 00:20:13.336 "base_bdevs_list": [ 00:20:13.336 { 00:20:13.336 "name": null, 00:20:13.336 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:13.336 "is_configured": false, 00:20:13.336 "data_offset": 0, 00:20:13.336 "data_size": 7936 00:20:13.336 }, 00:20:13.336 { 00:20:13.336 "name": "pt2", 00:20:13.336 "uuid": "00000000-0000-0000-0000-000000000002", 00:20:13.336 "is_configured": true, 00:20:13.336 "data_offset": 256, 00:20:13.336 "data_size": 7936 00:20:13.336 } 00:20:13.336 ] 00:20:13.336 }' 00:20:13.336 04:11:06 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:13.336 04:11:06 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:13.596 04:11:06 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:20:13.596 04:11:06 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:13.596 04:11:06 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:13.596 [2024-12-06 04:11:06.789856] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:20:13.596 [2024-12-06 04:11:06.789890] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:20:13.596 [2024-12-06 04:11:06.789972] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:20:13.596 [2024-12-06 04:11:06.790020] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 
00:20:13.596 [2024-12-06 04:11:06.790032] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:20:13.596 04:11:06 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:13.596 04:11:06 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:13.596 04:11:06 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:20:13.596 04:11:06 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:13.596 04:11:06 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:13.596 04:11:06 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:13.596 04:11:06 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:20:13.596 04:11:06 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:20:13.596 04:11:06 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:20:13.596 04:11:06 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:20:13.596 04:11:06 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:20:13.596 04:11:06 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:13.596 04:11:06 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:13.596 04:11:06 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:13.596 04:11:06 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:20:13.596 04:11:06 bdev_raid.raid_superblock_test_md_interleaved -- 
bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:20:13.596 04:11:06 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:20:13.596 04:11:06 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:20:13.596 04:11:06 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@519 -- # i=1 00:20:13.596 04:11:06 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:20:13.596 04:11:06 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:13.596 04:11:06 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:13.596 [2024-12-06 04:11:06.861715] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:20:13.596 [2024-12-06 04:11:06.861774] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:13.596 [2024-12-06 04:11:06.861790] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:20:13.596 [2024-12-06 04:11:06.861801] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:13.596 [2024-12-06 04:11:06.863695] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:13.596 [2024-12-06 04:11:06.863737] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:20:13.596 [2024-12-06 04:11:06.863789] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:20:13.596 [2024-12-06 04:11:06.863841] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:20:13.596 [2024-12-06 04:11:06.863907] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:20:13.596 [2024-12-06 04:11:06.863919] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: 
blockcnt 7936, blocklen 4128 00:20:13.596 [2024-12-06 04:11:06.864006] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:20:13.596 [2024-12-06 04:11:06.864093] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:20:13.596 [2024-12-06 04:11:06.864102] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:20:13.596 [2024-12-06 04:11:06.864165] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:13.596 pt2 00:20:13.596 04:11:06 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:13.596 04:11:06 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:20:13.596 04:11:06 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:13.596 04:11:06 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:13.596 04:11:06 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:20:13.596 04:11:06 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:20:13.596 04:11:06 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:20:13.596 04:11:06 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:13.596 04:11:06 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:13.596 04:11:06 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:13.597 04:11:06 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:13.597 04:11:06 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:13.597 04:11:06 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:13.597 04:11:06 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:13.597 04:11:06 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:13.597 04:11:06 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:13.597 04:11:06 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:13.597 "name": "raid_bdev1", 00:20:13.597 "uuid": "a558a4b5-c1e7-44ca-bf42-9f1af44fdd10", 00:20:13.597 "strip_size_kb": 0, 00:20:13.597 "state": "online", 00:20:13.597 "raid_level": "raid1", 00:20:13.597 "superblock": true, 00:20:13.597 "num_base_bdevs": 2, 00:20:13.597 "num_base_bdevs_discovered": 1, 00:20:13.597 "num_base_bdevs_operational": 1, 00:20:13.597 "base_bdevs_list": [ 00:20:13.597 { 00:20:13.597 "name": null, 00:20:13.597 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:13.597 "is_configured": false, 00:20:13.597 "data_offset": 256, 00:20:13.597 "data_size": 7936 00:20:13.597 }, 00:20:13.597 { 00:20:13.597 "name": "pt2", 00:20:13.597 "uuid": "00000000-0000-0000-0000-000000000002", 00:20:13.597 "is_configured": true, 00:20:13.597 "data_offset": 256, 00:20:13.597 "data_size": 7936 00:20:13.597 } 00:20:13.597 ] 00:20:13.597 }' 00:20:13.597 04:11:06 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:13.597 04:11:06 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:14.166 04:11:07 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:20:14.166 04:11:07 
bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:14.166 04:11:07 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:14.166 [2024-12-06 04:11:07.304967] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:20:14.166 [2024-12-06 04:11:07.305061] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:20:14.166 [2024-12-06 04:11:07.305166] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:20:14.166 [2024-12-06 04:11:07.305247] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:20:14.166 [2024-12-06 04:11:07.305292] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:20:14.166 04:11:07 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:14.166 04:11:07 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:14.166 04:11:07 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:20:14.166 04:11:07 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:14.166 04:11:07 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:14.166 04:11:07 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:14.166 04:11:07 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:20:14.166 04:11:07 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:20:14.166 04:11:07 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@532 -- # '[' 2 -gt 2 ']' 00:20:14.166 04:11:07 bdev_raid.raid_superblock_test_md_interleaved 
-- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:20:14.166 04:11:07 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:14.166 04:11:07 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:14.166 [2024-12-06 04:11:07.368852] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:20:14.166 [2024-12-06 04:11:07.368953] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:14.166 [2024-12-06 04:11:07.368997] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009980 00:20:14.166 [2024-12-06 04:11:07.369009] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:14.166 [2024-12-06 04:11:07.370897] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:14.166 [2024-12-06 04:11:07.370938] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:20:14.166 [2024-12-06 04:11:07.370992] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:20:14.166 [2024-12-06 04:11:07.371061] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:20:14.166 [2024-12-06 04:11:07.371162] bdev_raid.c:3685:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:20:14.166 [2024-12-06 04:11:07.371172] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:20:14.166 [2024-12-06 04:11:07.371191] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state configuring 00:20:14.166 [2024-12-06 04:11:07.371261] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:20:14.166 [2024-12-06 04:11:07.371341] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device 
register 0x617000008900 00:20:14.166 [2024-12-06 04:11:07.371351] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:20:14.166 [2024-12-06 04:11:07.371418] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:20:14.166 pt1 00:20:14.166 [2024-12-06 04:11:07.371487] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:20:14.166 [2024-12-06 04:11:07.371502] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:20:14.166 [2024-12-06 04:11:07.371574] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:14.166 04:11:07 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:14.166 04:11:07 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@542 -- # '[' 2 -gt 2 ']' 00:20:14.166 04:11:07 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:20:14.166 04:11:07 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:14.166 04:11:07 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:14.166 04:11:07 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:20:14.166 04:11:07 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:20:14.166 04:11:07 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:20:14.166 04:11:07 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:14.166 04:11:07 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:14.166 04:11:07 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:14.166 04:11:07 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:14.166 04:11:07 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:14.166 04:11:07 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:14.166 04:11:07 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:14.167 04:11:07 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:14.167 04:11:07 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:14.167 04:11:07 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:14.167 "name": "raid_bdev1", 00:20:14.167 "uuid": "a558a4b5-c1e7-44ca-bf42-9f1af44fdd10", 00:20:14.167 "strip_size_kb": 0, 00:20:14.167 "state": "online", 00:20:14.167 "raid_level": "raid1", 00:20:14.167 "superblock": true, 00:20:14.167 "num_base_bdevs": 2, 00:20:14.167 "num_base_bdevs_discovered": 1, 00:20:14.167 "num_base_bdevs_operational": 1, 00:20:14.167 "base_bdevs_list": [ 00:20:14.167 { 00:20:14.167 "name": null, 00:20:14.167 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:14.167 "is_configured": false, 00:20:14.167 "data_offset": 256, 00:20:14.167 "data_size": 7936 00:20:14.167 }, 00:20:14.167 { 00:20:14.167 "name": "pt2", 00:20:14.167 "uuid": "00000000-0000-0000-0000-000000000002", 00:20:14.167 "is_configured": true, 00:20:14.167 "data_offset": 256, 00:20:14.167 "data_size": 7936 00:20:14.167 } 00:20:14.167 ] 00:20:14.167 }' 00:20:14.167 04:11:07 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:14.167 04:11:07 bdev_raid.raid_superblock_test_md_interleaved -- 
common/autotest_common.sh@10 -- # set +x 00:20:14.735 04:11:07 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:20:14.735 04:11:07 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:20:14.735 04:11:07 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:14.735 04:11:07 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:14.735 04:11:07 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:14.735 04:11:07 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:20:14.735 04:11:07 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:20:14.735 04:11:07 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:20:14.735 04:11:07 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:14.735 04:11:07 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:14.735 [2024-12-06 04:11:07.836300] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:20:14.735 04:11:07 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:14.735 04:11:07 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@558 -- # '[' a558a4b5-c1e7-44ca-bf42-9f1af44fdd10 '!=' a558a4b5-c1e7-44ca-bf42-9f1af44fdd10 ']' 00:20:14.735 04:11:07 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@563 -- # killprocess 88974 00:20:14.735 04:11:07 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@954 -- # '[' -z 88974 ']' 00:20:14.735 04:11:07 
bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@958 -- # kill -0 88974 00:20:14.735 04:11:07 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@959 -- # uname 00:20:14.735 04:11:07 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:14.735 04:11:07 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 88974 00:20:14.735 04:11:07 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:20:14.735 04:11:07 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:20:14.735 04:11:07 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@972 -- # echo 'killing process with pid 88974' 00:20:14.735 killing process with pid 88974 00:20:14.735 04:11:07 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@973 -- # kill 88974 00:20:14.735 [2024-12-06 04:11:07.924644] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:20:14.736 [2024-12-06 04:11:07.924752] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:20:14.736 [2024-12-06 04:11:07.924803] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:20:14.736 [2024-12-06 04:11:07.924818] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 00:20:14.736 04:11:07 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@978 -- # wait 88974 00:20:14.996 [2024-12-06 04:11:08.130767] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:20:15.935 04:11:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@565 -- # return 0 00:20:15.935 00:20:15.935 real 0m5.958s 00:20:15.935 user 0m9.019s 00:20:15.935 sys 0m1.031s 00:20:15.935 
04:11:09 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:15.935 04:11:09 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:15.935 ************************************ 00:20:15.935 END TEST raid_superblock_test_md_interleaved 00:20:15.935 ************************************ 00:20:16.194 04:11:09 bdev_raid -- bdev/bdev_raid.sh@1013 -- # run_test raid_rebuild_test_sb_md_interleaved raid_rebuild_test raid1 2 true false false 00:20:16.194 04:11:09 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:20:16.194 04:11:09 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:16.194 04:11:09 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:20:16.194 ************************************ 00:20:16.194 START TEST raid_rebuild_test_sb_md_interleaved 00:20:16.194 ************************************ 00:20:16.194 04:11:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 2 true false false 00:20:16.194 04:11:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:20:16.194 04:11:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2 00:20:16.194 04:11:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:20:16.194 04:11:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:20:16.194 04:11:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@573 -- # local verify=false 00:20:16.194 04:11:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:20:16.194 04:11:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:20:16.194 04:11:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:20:16.194 04:11:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:20:16.194 04:11:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:20:16.195 04:11:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:20:16.195 04:11:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:20:16.195 04:11:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:20:16.195 04:11:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:20:16.195 04:11:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:20:16.195 04:11:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:20:16.195 04:11:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@576 -- # local strip_size 00:20:16.195 04:11:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@577 -- # local create_arg 00:20:16.195 04:11:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:20:16.195 04:11:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@579 -- # local data_offset 00:20:16.195 04:11:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:20:16.195 04:11:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:20:16.195 04:11:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:20:16.195 04:11:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:20:16.195 04:11:09 bdev_raid.raid_rebuild_test_sb_md_interleaved 
-- bdev/bdev_raid.sh@597 -- # raid_pid=89298 00:20:16.195 04:11:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:20:16.195 04:11:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@598 -- # waitforlisten 89298 00:20:16.195 04:11:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@835 -- # '[' -z 89298 ']' 00:20:16.195 04:11:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:16.195 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:16.195 04:11:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:16.195 04:11:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:16.195 04:11:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:16.195 04:11:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:16.195 I/O size of 3145728 is greater than zero copy threshold (65536). 00:20:16.195 Zero copy mechanism will not be used. 00:20:16.195 [2024-12-06 04:11:09.416290] Starting SPDK v25.01-pre git sha1 a4d2a837b / DPDK 24.03.0 initialization... 
00:20:16.195 [2024-12-06 04:11:09.416422] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89298 ] 00:20:16.628 [2024-12-06 04:11:09.589403] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:16.628 [2024-12-06 04:11:09.705859] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:16.628 [2024-12-06 04:11:09.898206] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:20:16.628 [2024-12-06 04:11:09.898269] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:20:17.283 04:11:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:17.283 04:11:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@868 -- # return 0 00:20:17.283 04:11:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:20:17.283 04:11:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b BaseBdev1_malloc 00:20:17.283 04:11:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:17.283 04:11:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:17.283 BaseBdev1_malloc 00:20:17.283 04:11:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:17.283 04:11:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:20:17.283 04:11:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:17.283 04:11:10 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:17.283 [2024-12-06 04:11:10.339015] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:20:17.283 [2024-12-06 04:11:10.339146] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:17.283 [2024-12-06 04:11:10.339173] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:20:17.283 [2024-12-06 04:11:10.339184] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:17.283 [2024-12-06 04:11:10.341086] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:17.283 [2024-12-06 04:11:10.341129] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:20:17.283 BaseBdev1 00:20:17.283 04:11:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:17.283 04:11:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:20:17.283 04:11:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b BaseBdev2_malloc 00:20:17.283 04:11:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:17.283 04:11:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:17.283 BaseBdev2_malloc 00:20:17.283 04:11:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:17.283 04:11:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:20:17.283 04:11:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:17.283 04:11:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- 
common/autotest_common.sh@10 -- # set +x 00:20:17.283 [2024-12-06 04:11:10.394090] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:20:17.283 [2024-12-06 04:11:10.394170] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:17.283 [2024-12-06 04:11:10.394191] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:20:17.283 [2024-12-06 04:11:10.394204] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:17.283 [2024-12-06 04:11:10.396049] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:17.283 [2024-12-06 04:11:10.396094] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:20:17.283 BaseBdev2 00:20:17.283 04:11:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:17.283 04:11:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b spare_malloc 00:20:17.283 04:11:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:17.283 04:11:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:17.283 spare_malloc 00:20:17.283 04:11:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:17.283 04:11:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:20:17.283 04:11:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:17.283 04:11:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:17.283 spare_delay 00:20:17.283 04:11:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:17.283 04:11:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:20:17.283 04:11:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:17.283 04:11:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:17.283 [2024-12-06 04:11:10.471871] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:20:17.283 [2024-12-06 04:11:10.471938] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:17.283 [2024-12-06 04:11:10.471961] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:20:17.283 [2024-12-06 04:11:10.471972] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:17.283 [2024-12-06 04:11:10.473877] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:17.283 [2024-12-06 04:11:10.473932] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:20:17.283 spare 00:20:17.283 04:11:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:17.284 04:11:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:20:17.284 04:11:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:17.284 04:11:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:17.284 [2024-12-06 04:11:10.483898] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:20:17.284 [2024-12-06 04:11:10.485871] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:20:17.284 [2024-12-06 
04:11:10.486120] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:20:17.284 [2024-12-06 04:11:10.486137] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:20:17.284 [2024-12-06 04:11:10.486220] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:20:17.284 [2024-12-06 04:11:10.486303] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:20:17.284 [2024-12-06 04:11:10.486312] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:20:17.284 [2024-12-06 04:11:10.486384] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:17.284 04:11:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:17.284 04:11:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:20:17.284 04:11:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:17.284 04:11:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:17.284 04:11:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:20:17.284 04:11:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:20:17.284 04:11:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:20:17.284 04:11:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:17.284 04:11:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:17.284 04:11:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:20:17.284 04:11:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:17.284 04:11:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:17.284 04:11:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:17.284 04:11:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:17.284 04:11:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:17.284 04:11:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:17.284 04:11:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:17.284 "name": "raid_bdev1", 00:20:17.284 "uuid": "febd8940-d513-4907-998e-0fa93c290af2", 00:20:17.284 "strip_size_kb": 0, 00:20:17.284 "state": "online", 00:20:17.284 "raid_level": "raid1", 00:20:17.284 "superblock": true, 00:20:17.284 "num_base_bdevs": 2, 00:20:17.284 "num_base_bdevs_discovered": 2, 00:20:17.284 "num_base_bdevs_operational": 2, 00:20:17.284 "base_bdevs_list": [ 00:20:17.284 { 00:20:17.284 "name": "BaseBdev1", 00:20:17.284 "uuid": "925aae78-96e8-5742-8795-f95eb454b442", 00:20:17.284 "is_configured": true, 00:20:17.284 "data_offset": 256, 00:20:17.284 "data_size": 7936 00:20:17.284 }, 00:20:17.284 { 00:20:17.284 "name": "BaseBdev2", 00:20:17.284 "uuid": "58245152-ebf0-5c42-b18c-255f6951fe25", 00:20:17.284 "is_configured": true, 00:20:17.284 "data_offset": 256, 00:20:17.284 "data_size": 7936 00:20:17.284 } 00:20:17.284 ] 00:20:17.284 }' 00:20:17.284 04:11:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:17.284 04:11:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:17.911 04:11:10 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:20:17.911 04:11:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:20:17.911 04:11:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:17.911 04:11:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:17.911 [2024-12-06 04:11:10.927432] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:20:17.911 04:11:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:17.911 04:11:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=7936 00:20:17.911 04:11:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:17.911 04:11:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:17.911 04:11:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:17.911 04:11:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:20:17.911 04:11:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:17.911 04:11:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@619 -- # data_offset=256 00:20:17.911 04:11:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:20:17.911 04:11:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@624 -- # '[' false = true ']' 00:20:17.911 04:11:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:20:17.911 04:11:10 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:17.911 04:11:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:17.911 [2024-12-06 04:11:11.002988] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:20:17.911 04:11:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:17.911 04:11:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:20:17.911 04:11:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:17.911 04:11:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:17.911 04:11:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:20:17.911 04:11:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:20:17.911 04:11:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:20:17.911 04:11:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:17.911 04:11:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:17.911 04:11:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:17.911 04:11:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:17.911 04:11:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:17.911 04:11:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:17.911 04:11:11 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:17.911 04:11:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:17.911 04:11:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:17.911 04:11:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:17.911 "name": "raid_bdev1", 00:20:17.911 "uuid": "febd8940-d513-4907-998e-0fa93c290af2", 00:20:17.911 "strip_size_kb": 0, 00:20:17.911 "state": "online", 00:20:17.911 "raid_level": "raid1", 00:20:17.911 "superblock": true, 00:20:17.911 "num_base_bdevs": 2, 00:20:17.911 "num_base_bdevs_discovered": 1, 00:20:17.911 "num_base_bdevs_operational": 1, 00:20:17.911 "base_bdevs_list": [ 00:20:17.911 { 00:20:17.911 "name": null, 00:20:17.911 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:17.911 "is_configured": false, 00:20:17.911 "data_offset": 0, 00:20:17.911 "data_size": 7936 00:20:17.911 }, 00:20:17.912 { 00:20:17.912 "name": "BaseBdev2", 00:20:17.912 "uuid": "58245152-ebf0-5c42-b18c-255f6951fe25", 00:20:17.912 "is_configured": true, 00:20:17.912 "data_offset": 256, 00:20:17.912 "data_size": 7936 00:20:17.912 } 00:20:17.912 ] 00:20:17.912 }' 00:20:17.912 04:11:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:17.912 04:11:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:18.239 04:11:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:20:18.239 04:11:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:18.239 04:11:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:18.239 [2024-12-06 04:11:11.398328] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:20:18.239 [2024-12-06 04:11:11.413426] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:20:18.239 04:11:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:18.239 04:11:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@647 -- # sleep 1 00:20:18.239 [2024-12-06 04:11:11.415421] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:20:19.213 04:11:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:19.213 04:11:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:19.213 04:11:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:20:19.213 04:11:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:20:19.213 04:11:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:19.213 04:11:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:19.213 04:11:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:19.213 04:11:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:19.213 04:11:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:19.213 04:11:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:19.213 04:11:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:19.213 "name": "raid_bdev1", 00:20:19.213 
"uuid": "febd8940-d513-4907-998e-0fa93c290af2", 00:20:19.213 "strip_size_kb": 0, 00:20:19.213 "state": "online", 00:20:19.213 "raid_level": "raid1", 00:20:19.213 "superblock": true, 00:20:19.213 "num_base_bdevs": 2, 00:20:19.213 "num_base_bdevs_discovered": 2, 00:20:19.213 "num_base_bdevs_operational": 2, 00:20:19.213 "process": { 00:20:19.213 "type": "rebuild", 00:20:19.213 "target": "spare", 00:20:19.213 "progress": { 00:20:19.213 "blocks": 2560, 00:20:19.213 "percent": 32 00:20:19.213 } 00:20:19.213 }, 00:20:19.213 "base_bdevs_list": [ 00:20:19.213 { 00:20:19.213 "name": "spare", 00:20:19.213 "uuid": "0e3479a1-28ca-5885-9ab2-23862c267b3b", 00:20:19.213 "is_configured": true, 00:20:19.213 "data_offset": 256, 00:20:19.213 "data_size": 7936 00:20:19.213 }, 00:20:19.213 { 00:20:19.213 "name": "BaseBdev2", 00:20:19.213 "uuid": "58245152-ebf0-5c42-b18c-255f6951fe25", 00:20:19.213 "is_configured": true, 00:20:19.213 "data_offset": 256, 00:20:19.213 "data_size": 7936 00:20:19.213 } 00:20:19.213 ] 00:20:19.213 }' 00:20:19.213 04:11:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:19.213 04:11:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:20:19.213 04:11:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:19.491 04:11:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:20:19.491 04:11:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:20:19.491 04:11:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:19.491 04:11:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:19.491 [2024-12-06 04:11:12.578525] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: 
*DEBUG*: spare 00:20:19.491 [2024-12-06 04:11:12.620533] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:20:19.491 [2024-12-06 04:11:12.620594] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:19.491 [2024-12-06 04:11:12.620609] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:20:19.491 [2024-12-06 04:11:12.620621] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:20:19.491 04:11:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:19.491 04:11:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:20:19.491 04:11:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:19.491 04:11:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:19.491 04:11:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:20:19.491 04:11:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:20:19.491 04:11:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:20:19.491 04:11:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:19.491 04:11:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:19.491 04:11:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:19.491 04:11:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:19.491 04:11:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:19.491 04:11:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:19.491 04:11:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:19.491 04:11:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:19.491 04:11:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:19.491 04:11:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:19.491 "name": "raid_bdev1", 00:20:19.491 "uuid": "febd8940-d513-4907-998e-0fa93c290af2", 00:20:19.491 "strip_size_kb": 0, 00:20:19.491 "state": "online", 00:20:19.491 "raid_level": "raid1", 00:20:19.491 "superblock": true, 00:20:19.491 "num_base_bdevs": 2, 00:20:19.491 "num_base_bdevs_discovered": 1, 00:20:19.491 "num_base_bdevs_operational": 1, 00:20:19.491 "base_bdevs_list": [ 00:20:19.491 { 00:20:19.491 "name": null, 00:20:19.491 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:19.491 "is_configured": false, 00:20:19.491 "data_offset": 0, 00:20:19.491 "data_size": 7936 00:20:19.491 }, 00:20:19.491 { 00:20:19.491 "name": "BaseBdev2", 00:20:19.491 "uuid": "58245152-ebf0-5c42-b18c-255f6951fe25", 00:20:19.491 "is_configured": true, 00:20:19.491 "data_offset": 256, 00:20:19.491 "data_size": 7936 00:20:19.491 } 00:20:19.491 ] 00:20:19.491 }' 00:20:19.491 04:11:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:19.491 04:11:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:19.750 04:11:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:20:19.751 04:11:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # 
local raid_bdev_name=raid_bdev1 00:20:19.751 04:11:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:20:19.751 04:11:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none 00:20:19.751 04:11:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:19.751 04:11:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:19.751 04:11:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:19.751 04:11:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:19.751 04:11:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:20.009 04:11:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:20.009 04:11:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:20.009 "name": "raid_bdev1", 00:20:20.010 "uuid": "febd8940-d513-4907-998e-0fa93c290af2", 00:20:20.010 "strip_size_kb": 0, 00:20:20.010 "state": "online", 00:20:20.010 "raid_level": "raid1", 00:20:20.010 "superblock": true, 00:20:20.010 "num_base_bdevs": 2, 00:20:20.010 "num_base_bdevs_discovered": 1, 00:20:20.010 "num_base_bdevs_operational": 1, 00:20:20.010 "base_bdevs_list": [ 00:20:20.010 { 00:20:20.010 "name": null, 00:20:20.010 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:20.010 "is_configured": false, 00:20:20.010 "data_offset": 0, 00:20:20.010 "data_size": 7936 00:20:20.010 }, 00:20:20.010 { 00:20:20.010 "name": "BaseBdev2", 00:20:20.010 "uuid": "58245152-ebf0-5c42-b18c-255f6951fe25", 00:20:20.010 "is_configured": true, 00:20:20.010 "data_offset": 256, 00:20:20.010 "data_size": 7936 00:20:20.010 } 00:20:20.010 ] 00:20:20.010 }' 
00:20:20.010 04:11:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:20.010 04:11:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:20:20.010 04:11:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:20.010 04:11:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:20:20.010 04:11:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:20:20.010 04:11:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:20.010 04:11:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:20.010 [2024-12-06 04:11:13.222192] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:20:20.010 [2024-12-06 04:11:13.237985] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:20:20.010 04:11:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:20.010 04:11:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@663 -- # sleep 1 00:20:20.010 [2024-12-06 04:11:13.239766] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:20:20.946 04:11:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:20.946 04:11:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:20.946 04:11:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:20:20.946 04:11:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # 
local target=spare 00:20:20.946 04:11:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:20.946 04:11:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:20.946 04:11:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:20.946 04:11:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:20.946 04:11:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:20.946 04:11:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:21.226 04:11:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:21.226 "name": "raid_bdev1", 00:20:21.226 "uuid": "febd8940-d513-4907-998e-0fa93c290af2", 00:20:21.226 "strip_size_kb": 0, 00:20:21.226 "state": "online", 00:20:21.226 "raid_level": "raid1", 00:20:21.226 "superblock": true, 00:20:21.226 "num_base_bdevs": 2, 00:20:21.226 "num_base_bdevs_discovered": 2, 00:20:21.226 "num_base_bdevs_operational": 2, 00:20:21.226 "process": { 00:20:21.226 "type": "rebuild", 00:20:21.226 "target": "spare", 00:20:21.226 "progress": { 00:20:21.226 "blocks": 2560, 00:20:21.226 "percent": 32 00:20:21.226 } 00:20:21.226 }, 00:20:21.226 "base_bdevs_list": [ 00:20:21.226 { 00:20:21.226 "name": "spare", 00:20:21.226 "uuid": "0e3479a1-28ca-5885-9ab2-23862c267b3b", 00:20:21.226 "is_configured": true, 00:20:21.226 "data_offset": 256, 00:20:21.226 "data_size": 7936 00:20:21.226 }, 00:20:21.226 { 00:20:21.226 "name": "BaseBdev2", 00:20:21.226 "uuid": "58245152-ebf0-5c42-b18c-255f6951fe25", 00:20:21.226 "is_configured": true, 00:20:21.226 "data_offset": 256, 00:20:21.226 "data_size": 7936 00:20:21.226 } 00:20:21.226 ] 00:20:21.226 }' 00:20:21.226 04:11:14 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:21.226 04:11:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:20:21.226 04:11:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:21.226 04:11:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:20:21.226 04:11:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:20:21.226 04:11:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:20:21.226 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:20:21.226 04:11:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:20:21.226 04:11:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:20:21.226 04:11:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:20:21.226 04:11:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@706 -- # local timeout=752 00:20:21.226 04:11:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:20:21.226 04:11:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:21.226 04:11:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:21.226 04:11:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:20:21.226 04:11:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:20:21.226 04:11:14 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:21.226 04:11:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:21.226 04:11:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:21.226 04:11:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:21.226 04:11:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:21.226 04:11:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:21.226 04:11:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:21.226 "name": "raid_bdev1", 00:20:21.226 "uuid": "febd8940-d513-4907-998e-0fa93c290af2", 00:20:21.226 "strip_size_kb": 0, 00:20:21.226 "state": "online", 00:20:21.226 "raid_level": "raid1", 00:20:21.226 "superblock": true, 00:20:21.226 "num_base_bdevs": 2, 00:20:21.226 "num_base_bdevs_discovered": 2, 00:20:21.226 "num_base_bdevs_operational": 2, 00:20:21.226 "process": { 00:20:21.226 "type": "rebuild", 00:20:21.226 "target": "spare", 00:20:21.226 "progress": { 00:20:21.226 "blocks": 2816, 00:20:21.226 "percent": 35 00:20:21.226 } 00:20:21.226 }, 00:20:21.226 "base_bdevs_list": [ 00:20:21.226 { 00:20:21.226 "name": "spare", 00:20:21.226 "uuid": "0e3479a1-28ca-5885-9ab2-23862c267b3b", 00:20:21.226 "is_configured": true, 00:20:21.226 "data_offset": 256, 00:20:21.226 "data_size": 7936 00:20:21.226 }, 00:20:21.226 { 00:20:21.226 "name": "BaseBdev2", 00:20:21.226 "uuid": "58245152-ebf0-5c42-b18c-255f6951fe25", 00:20:21.226 "is_configured": true, 00:20:21.226 "data_offset": 256, 00:20:21.226 "data_size": 7936 00:20:21.226 } 00:20:21.226 ] 00:20:21.226 }' 00:20:21.226 04:11:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:21.226 04:11:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:20:21.226 04:11:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:21.226 04:11:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:20:21.226 04:11:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@711 -- # sleep 1 00:20:22.604 04:11:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:20:22.604 04:11:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:22.604 04:11:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:22.604 04:11:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:20:22.604 04:11:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:20:22.604 04:11:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:22.604 04:11:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:22.604 04:11:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:22.604 04:11:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:22.604 04:11:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:22.604 04:11:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:22.604 04:11:15 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:22.604 "name": "raid_bdev1", 00:20:22.604 "uuid": "febd8940-d513-4907-998e-0fa93c290af2", 00:20:22.604 "strip_size_kb": 0, 00:20:22.604 "state": "online", 00:20:22.604 "raid_level": "raid1", 00:20:22.604 "superblock": true, 00:20:22.604 "num_base_bdevs": 2, 00:20:22.604 "num_base_bdevs_discovered": 2, 00:20:22.604 "num_base_bdevs_operational": 2, 00:20:22.604 "process": { 00:20:22.604 "type": "rebuild", 00:20:22.604 "target": "spare", 00:20:22.604 "progress": { 00:20:22.604 "blocks": 5632, 00:20:22.604 "percent": 70 00:20:22.604 } 00:20:22.604 }, 00:20:22.604 "base_bdevs_list": [ 00:20:22.604 { 00:20:22.604 "name": "spare", 00:20:22.604 "uuid": "0e3479a1-28ca-5885-9ab2-23862c267b3b", 00:20:22.604 "is_configured": true, 00:20:22.604 "data_offset": 256, 00:20:22.604 "data_size": 7936 00:20:22.604 }, 00:20:22.604 { 00:20:22.604 "name": "BaseBdev2", 00:20:22.604 "uuid": "58245152-ebf0-5c42-b18c-255f6951fe25", 00:20:22.604 "is_configured": true, 00:20:22.604 "data_offset": 256, 00:20:22.604 "data_size": 7936 00:20:22.604 } 00:20:22.604 ] 00:20:22.604 }' 00:20:22.604 04:11:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:22.604 04:11:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:20:22.604 04:11:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:22.604 04:11:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:20:22.604 04:11:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@711 -- # sleep 1 00:20:23.172 [2024-12-06 04:11:16.352971] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:20:23.172 [2024-12-06 04:11:16.353131] 
bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:20:23.172 [2024-12-06 04:11:16.353282] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:23.432 04:11:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:20:23.432 04:11:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:23.432 04:11:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:23.432 04:11:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:20:23.432 04:11:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:20:23.432 04:11:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:23.432 04:11:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:23.432 04:11:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:23.432 04:11:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:23.432 04:11:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:23.432 04:11:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:23.432 04:11:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:23.432 "name": "raid_bdev1", 00:20:23.432 "uuid": "febd8940-d513-4907-998e-0fa93c290af2", 00:20:23.432 "strip_size_kb": 0, 00:20:23.432 "state": "online", 00:20:23.432 "raid_level": "raid1", 00:20:23.432 "superblock": true, 00:20:23.432 "num_base_bdevs": 2, 00:20:23.432 
"num_base_bdevs_discovered": 2, 00:20:23.432 "num_base_bdevs_operational": 2, 00:20:23.432 "base_bdevs_list": [ 00:20:23.432 { 00:20:23.432 "name": "spare", 00:20:23.432 "uuid": "0e3479a1-28ca-5885-9ab2-23862c267b3b", 00:20:23.432 "is_configured": true, 00:20:23.432 "data_offset": 256, 00:20:23.432 "data_size": 7936 00:20:23.432 }, 00:20:23.432 { 00:20:23.432 "name": "BaseBdev2", 00:20:23.432 "uuid": "58245152-ebf0-5c42-b18c-255f6951fe25", 00:20:23.432 "is_configured": true, 00:20:23.432 "data_offset": 256, 00:20:23.433 "data_size": 7936 00:20:23.433 } 00:20:23.433 ] 00:20:23.433 }' 00:20:23.433 04:11:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:23.433 04:11:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:20:23.433 04:11:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:23.692 04:11:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:20:23.693 04:11:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@709 -- # break 00:20:23.693 04:11:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:20:23.693 04:11:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:23.693 04:11:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:20:23.693 04:11:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none 00:20:23.693 04:11:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:23.693 04:11:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:23.693 
04:11:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:23.693 04:11:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:23.693 04:11:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:23.693 04:11:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:23.693 04:11:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:23.693 "name": "raid_bdev1", 00:20:23.693 "uuid": "febd8940-d513-4907-998e-0fa93c290af2", 00:20:23.693 "strip_size_kb": 0, 00:20:23.693 "state": "online", 00:20:23.693 "raid_level": "raid1", 00:20:23.693 "superblock": true, 00:20:23.693 "num_base_bdevs": 2, 00:20:23.693 "num_base_bdevs_discovered": 2, 00:20:23.693 "num_base_bdevs_operational": 2, 00:20:23.693 "base_bdevs_list": [ 00:20:23.693 { 00:20:23.693 "name": "spare", 00:20:23.693 "uuid": "0e3479a1-28ca-5885-9ab2-23862c267b3b", 00:20:23.693 "is_configured": true, 00:20:23.693 "data_offset": 256, 00:20:23.693 "data_size": 7936 00:20:23.693 }, 00:20:23.693 { 00:20:23.693 "name": "BaseBdev2", 00:20:23.693 "uuid": "58245152-ebf0-5c42-b18c-255f6951fe25", 00:20:23.693 "is_configured": true, 00:20:23.693 "data_offset": 256, 00:20:23.693 "data_size": 7936 00:20:23.693 } 00:20:23.693 ] 00:20:23.693 }' 00:20:23.693 04:11:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:23.693 04:11:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:20:23.693 04:11:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:23.693 04:11:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:20:23.693 04:11:16 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:20:23.693 04:11:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:23.693 04:11:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:23.693 04:11:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:20:23.693 04:11:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:20:23.693 04:11:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:20:23.693 04:11:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:23.693 04:11:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:23.693 04:11:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:23.693 04:11:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:23.693 04:11:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:23.693 04:11:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:23.693 04:11:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:23.693 04:11:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:23.693 04:11:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:23.693 04:11:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:23.693 "name": 
"raid_bdev1", 00:20:23.693 "uuid": "febd8940-d513-4907-998e-0fa93c290af2", 00:20:23.693 "strip_size_kb": 0, 00:20:23.693 "state": "online", 00:20:23.693 "raid_level": "raid1", 00:20:23.693 "superblock": true, 00:20:23.693 "num_base_bdevs": 2, 00:20:23.693 "num_base_bdevs_discovered": 2, 00:20:23.693 "num_base_bdevs_operational": 2, 00:20:23.693 "base_bdevs_list": [ 00:20:23.693 { 00:20:23.693 "name": "spare", 00:20:23.693 "uuid": "0e3479a1-28ca-5885-9ab2-23862c267b3b", 00:20:23.693 "is_configured": true, 00:20:23.693 "data_offset": 256, 00:20:23.693 "data_size": 7936 00:20:23.693 }, 00:20:23.693 { 00:20:23.693 "name": "BaseBdev2", 00:20:23.693 "uuid": "58245152-ebf0-5c42-b18c-255f6951fe25", 00:20:23.693 "is_configured": true, 00:20:23.693 "data_offset": 256, 00:20:23.693 "data_size": 7936 00:20:23.693 } 00:20:23.693 ] 00:20:23.693 }' 00:20:23.693 04:11:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:23.693 04:11:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:24.266 04:11:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:20:24.266 04:11:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:24.266 04:11:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:24.266 [2024-12-06 04:11:17.392564] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:20:24.266 [2024-12-06 04:11:17.392686] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:20:24.266 [2024-12-06 04:11:17.392807] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:20:24.266 [2024-12-06 04:11:17.392922] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:20:24.266 [2024-12-06 
04:11:17.392975] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:20:24.266 04:11:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:24.266 04:11:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:24.266 04:11:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@720 -- # jq length 00:20:24.266 04:11:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:24.266 04:11:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:24.266 04:11:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:24.266 04:11:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:20:24.266 04:11:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@722 -- # '[' false = true ']' 00:20:24.266 04:11:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:20:24.266 04:11:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:20:24.266 04:11:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:24.266 04:11:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:24.266 04:11:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:24.266 04:11:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:20:24.266 04:11:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:24.266 04:11:17 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:24.266 [2024-12-06 04:11:17.468416] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:20:24.266 [2024-12-06 04:11:17.468477] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:24.266 [2024-12-06 04:11:17.468500] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:20:24.266 [2024-12-06 04:11:17.468508] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:24.266 [2024-12-06 04:11:17.470515] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:24.266 [2024-12-06 04:11:17.470600] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:20:24.266 [2024-12-06 04:11:17.470670] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:20:24.266 [2024-12-06 04:11:17.470733] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:20:24.266 [2024-12-06 04:11:17.470852] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:20:24.266 spare 00:20:24.266 04:11:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:24.266 04:11:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:20:24.266 04:11:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:24.266 04:11:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:24.266 [2024-12-06 04:11:17.570757] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:20:24.266 [2024-12-06 04:11:17.570802] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:20:24.266 [2024-12-06 04:11:17.570935] bdev_raid.c: 
265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:20:24.266 [2024-12-06 04:11:17.571041] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:20:24.266 [2024-12-06 04:11:17.571072] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:20:24.266 [2024-12-06 04:11:17.571174] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:24.266 04:11:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:24.266 04:11:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:20:24.266 04:11:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:24.266 04:11:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:24.266 04:11:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:20:24.266 04:11:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:20:24.266 04:11:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:20:24.266 04:11:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:24.266 04:11:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:24.266 04:11:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:24.266 04:11:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:24.266 04:11:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:24.266 04:11:17 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:24.266 04:11:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:24.266 04:11:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:24.266 04:11:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:24.525 04:11:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:24.525 "name": "raid_bdev1", 00:20:24.525 "uuid": "febd8940-d513-4907-998e-0fa93c290af2", 00:20:24.525 "strip_size_kb": 0, 00:20:24.525 "state": "online", 00:20:24.525 "raid_level": "raid1", 00:20:24.525 "superblock": true, 00:20:24.526 "num_base_bdevs": 2, 00:20:24.526 "num_base_bdevs_discovered": 2, 00:20:24.526 "num_base_bdevs_operational": 2, 00:20:24.526 "base_bdevs_list": [ 00:20:24.526 { 00:20:24.526 "name": "spare", 00:20:24.526 "uuid": "0e3479a1-28ca-5885-9ab2-23862c267b3b", 00:20:24.526 "is_configured": true, 00:20:24.526 "data_offset": 256, 00:20:24.526 "data_size": 7936 00:20:24.526 }, 00:20:24.526 { 00:20:24.526 "name": "BaseBdev2", 00:20:24.526 "uuid": "58245152-ebf0-5c42-b18c-255f6951fe25", 00:20:24.526 "is_configured": true, 00:20:24.526 "data_offset": 256, 00:20:24.526 "data_size": 7936 00:20:24.526 } 00:20:24.526 ] 00:20:24.526 }' 00:20:24.526 04:11:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:24.526 04:11:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:24.786 04:11:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:20:24.786 04:11:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:24.786 04:11:18 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:20:24.786 04:11:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none 00:20:24.786 04:11:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:24.786 04:11:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:24.786 04:11:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:24.786 04:11:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:24.786 04:11:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:24.786 04:11:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:24.786 04:11:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:24.786 "name": "raid_bdev1", 00:20:24.786 "uuid": "febd8940-d513-4907-998e-0fa93c290af2", 00:20:24.786 "strip_size_kb": 0, 00:20:24.786 "state": "online", 00:20:24.786 "raid_level": "raid1", 00:20:24.786 "superblock": true, 00:20:24.786 "num_base_bdevs": 2, 00:20:24.786 "num_base_bdevs_discovered": 2, 00:20:24.786 "num_base_bdevs_operational": 2, 00:20:24.786 "base_bdevs_list": [ 00:20:24.786 { 00:20:24.786 "name": "spare", 00:20:24.786 "uuid": "0e3479a1-28ca-5885-9ab2-23862c267b3b", 00:20:24.786 "is_configured": true, 00:20:24.786 "data_offset": 256, 00:20:24.786 "data_size": 7936 00:20:24.786 }, 00:20:24.786 { 00:20:24.786 "name": "BaseBdev2", 00:20:24.786 "uuid": "58245152-ebf0-5c42-b18c-255f6951fe25", 00:20:24.786 "is_configured": true, 00:20:24.786 "data_offset": 256, 00:20:24.786 "data_size": 7936 00:20:24.786 } 00:20:24.786 ] 00:20:24.786 }' 00:20:24.786 04:11:18 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:24.786 04:11:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:20:25.046 04:11:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:25.046 04:11:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:20:25.046 04:11:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:25.046 04:11:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:25.046 04:11:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:25.046 04:11:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:20:25.046 04:11:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:25.046 04:11:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:20:25.046 04:11:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:20:25.046 04:11:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:25.046 04:11:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:25.046 [2024-12-06 04:11:18.243202] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:20:25.046 04:11:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:25.046 04:11:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:20:25.046 04:11:18 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:25.046 04:11:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:25.046 04:11:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:20:25.046 04:11:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:20:25.046 04:11:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:20:25.046 04:11:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:25.046 04:11:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:25.046 04:11:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:25.046 04:11:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:25.046 04:11:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:25.046 04:11:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:25.046 04:11:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:25.046 04:11:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:25.046 04:11:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:25.046 04:11:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:25.046 "name": "raid_bdev1", 00:20:25.046 "uuid": "febd8940-d513-4907-998e-0fa93c290af2", 00:20:25.046 "strip_size_kb": 0, 00:20:25.046 "state": "online", 00:20:25.046 
"raid_level": "raid1", 00:20:25.046 "superblock": true, 00:20:25.046 "num_base_bdevs": 2, 00:20:25.046 "num_base_bdevs_discovered": 1, 00:20:25.046 "num_base_bdevs_operational": 1, 00:20:25.046 "base_bdevs_list": [ 00:20:25.046 { 00:20:25.046 "name": null, 00:20:25.046 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:25.046 "is_configured": false, 00:20:25.046 "data_offset": 0, 00:20:25.046 "data_size": 7936 00:20:25.046 }, 00:20:25.046 { 00:20:25.046 "name": "BaseBdev2", 00:20:25.046 "uuid": "58245152-ebf0-5c42-b18c-255f6951fe25", 00:20:25.046 "is_configured": true, 00:20:25.046 "data_offset": 256, 00:20:25.046 "data_size": 7936 00:20:25.046 } 00:20:25.046 ] 00:20:25.046 }' 00:20:25.046 04:11:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:25.046 04:11:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:25.615 04:11:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:20:25.615 04:11:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:25.615 04:11:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:25.615 [2024-12-06 04:11:18.698416] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:20:25.615 [2024-12-06 04:11:18.698689] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:20:25.615 [2024-12-06 04:11:18.698757] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:20:25.615 [2024-12-06 04:11:18.698817] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:20:25.615 [2024-12-06 04:11:18.714284] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:20:25.615 04:11:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:25.615 [2024-12-06 04:11:18.716114] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:20:25.615 04:11:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@757 -- # sleep 1 00:20:26.556 04:11:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:26.556 04:11:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:26.556 04:11:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:20:26.556 04:11:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:20:26.556 04:11:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:26.556 04:11:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:26.556 04:11:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:26.556 04:11:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:26.556 04:11:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:26.556 04:11:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:26.556 04:11:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 
00:20:26.556 "name": "raid_bdev1", 00:20:26.556 "uuid": "febd8940-d513-4907-998e-0fa93c290af2", 00:20:26.556 "strip_size_kb": 0, 00:20:26.556 "state": "online", 00:20:26.556 "raid_level": "raid1", 00:20:26.556 "superblock": true, 00:20:26.556 "num_base_bdevs": 2, 00:20:26.556 "num_base_bdevs_discovered": 2, 00:20:26.556 "num_base_bdevs_operational": 2, 00:20:26.556 "process": { 00:20:26.556 "type": "rebuild", 00:20:26.556 "target": "spare", 00:20:26.556 "progress": { 00:20:26.556 "blocks": 2560, 00:20:26.556 "percent": 32 00:20:26.556 } 00:20:26.556 }, 00:20:26.556 "base_bdevs_list": [ 00:20:26.556 { 00:20:26.556 "name": "spare", 00:20:26.556 "uuid": "0e3479a1-28ca-5885-9ab2-23862c267b3b", 00:20:26.556 "is_configured": true, 00:20:26.556 "data_offset": 256, 00:20:26.556 "data_size": 7936 00:20:26.556 }, 00:20:26.556 { 00:20:26.556 "name": "BaseBdev2", 00:20:26.556 "uuid": "58245152-ebf0-5c42-b18c-255f6951fe25", 00:20:26.556 "is_configured": true, 00:20:26.556 "data_offset": 256, 00:20:26.556 "data_size": 7936 00:20:26.556 } 00:20:26.556 ] 00:20:26.556 }' 00:20:26.556 04:11:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:26.556 04:11:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:20:26.556 04:11:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:26.556 04:11:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:20:26.556 04:11:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:20:26.556 04:11:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:26.556 04:11:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:26.556 [2024-12-06 04:11:19.861263] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:20:26.816 [2024-12-06 04:11:19.921667] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:20:26.816 [2024-12-06 04:11:19.921754] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:26.816 [2024-12-06 04:11:19.921770] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:20:26.816 [2024-12-06 04:11:19.921780] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:20:26.816 04:11:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:26.816 04:11:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:20:26.816 04:11:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:26.816 04:11:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:26.816 04:11:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:20:26.816 04:11:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:20:26.816 04:11:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:20:26.816 04:11:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:26.816 04:11:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:26.816 04:11:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:26.816 04:11:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:26.816 04:11:19 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:26.816 04:11:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:26.816 04:11:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:26.816 04:11:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:26.816 04:11:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:26.816 04:11:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:26.816 "name": "raid_bdev1", 00:20:26.816 "uuid": "febd8940-d513-4907-998e-0fa93c290af2", 00:20:26.816 "strip_size_kb": 0, 00:20:26.816 "state": "online", 00:20:26.816 "raid_level": "raid1", 00:20:26.816 "superblock": true, 00:20:26.816 "num_base_bdevs": 2, 00:20:26.816 "num_base_bdevs_discovered": 1, 00:20:26.816 "num_base_bdevs_operational": 1, 00:20:26.816 "base_bdevs_list": [ 00:20:26.816 { 00:20:26.816 "name": null, 00:20:26.816 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:26.816 "is_configured": false, 00:20:26.816 "data_offset": 0, 00:20:26.816 "data_size": 7936 00:20:26.816 }, 00:20:26.816 { 00:20:26.816 "name": "BaseBdev2", 00:20:26.816 "uuid": "58245152-ebf0-5c42-b18c-255f6951fe25", 00:20:26.816 "is_configured": true, 00:20:26.816 "data_offset": 256, 00:20:26.816 "data_size": 7936 00:20:26.816 } 00:20:26.816 ] 00:20:26.816 }' 00:20:26.816 04:11:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:26.816 04:11:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:27.076 04:11:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:20:27.076 04:11:20 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:27.076 04:11:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:27.076 [2024-12-06 04:11:20.385944] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:20:27.076 [2024-12-06 04:11:20.386108] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:27.076 [2024-12-06 04:11:20.386159] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:20:27.076 [2024-12-06 04:11:20.386209] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:27.076 [2024-12-06 04:11:20.386429] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:27.076 [2024-12-06 04:11:20.386480] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:20:27.076 [2024-12-06 04:11:20.386565] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:20:27.076 [2024-12-06 04:11:20.386603] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:20:27.076 [2024-12-06 04:11:20.386642] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:20:27.076 [2024-12-06 04:11:20.386682] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:20:27.076 [2024-12-06 04:11:20.402560] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:20:27.076 spare 00:20:27.076 04:11:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:27.076 [2024-12-06 04:11:20.404457] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:20:27.076 04:11:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@764 -- # sleep 1 00:20:28.460 04:11:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:28.460 04:11:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:28.460 04:11:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:20:28.460 04:11:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:20:28.460 04:11:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:28.460 04:11:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:28.460 04:11:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:28.460 04:11:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:28.460 04:11:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:28.460 04:11:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:28.460 04:11:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # 
raid_bdev_info='{ 00:20:28.460 "name": "raid_bdev1", 00:20:28.460 "uuid": "febd8940-d513-4907-998e-0fa93c290af2", 00:20:28.460 "strip_size_kb": 0, 00:20:28.460 "state": "online", 00:20:28.460 "raid_level": "raid1", 00:20:28.460 "superblock": true, 00:20:28.460 "num_base_bdevs": 2, 00:20:28.460 "num_base_bdevs_discovered": 2, 00:20:28.460 "num_base_bdevs_operational": 2, 00:20:28.460 "process": { 00:20:28.460 "type": "rebuild", 00:20:28.460 "target": "spare", 00:20:28.460 "progress": { 00:20:28.460 "blocks": 2560, 00:20:28.460 "percent": 32 00:20:28.460 } 00:20:28.460 }, 00:20:28.460 "base_bdevs_list": [ 00:20:28.460 { 00:20:28.460 "name": "spare", 00:20:28.460 "uuid": "0e3479a1-28ca-5885-9ab2-23862c267b3b", 00:20:28.460 "is_configured": true, 00:20:28.460 "data_offset": 256, 00:20:28.460 "data_size": 7936 00:20:28.460 }, 00:20:28.460 { 00:20:28.460 "name": "BaseBdev2", 00:20:28.460 "uuid": "58245152-ebf0-5c42-b18c-255f6951fe25", 00:20:28.460 "is_configured": true, 00:20:28.460 "data_offset": 256, 00:20:28.460 "data_size": 7936 00:20:28.460 } 00:20:28.460 ] 00:20:28.460 }' 00:20:28.460 04:11:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:28.460 04:11:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:20:28.460 04:11:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:28.460 04:11:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:20:28.460 04:11:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:20:28.460 04:11:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:28.460 04:11:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:28.460 [2024-12-06 
04:11:21.548317] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:20:28.460 [2024-12-06 04:11:21.609933] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:20:28.460 [2024-12-06 04:11:21.610046] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:28.460 [2024-12-06 04:11:21.610080] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:20:28.460 [2024-12-06 04:11:21.610089] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:20:28.460 04:11:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:28.460 04:11:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:20:28.460 04:11:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:28.460 04:11:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:28.460 04:11:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:20:28.460 04:11:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:20:28.460 04:11:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:20:28.460 04:11:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:28.460 04:11:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:28.460 04:11:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:28.460 04:11:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:28.460 04:11:21 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:28.460 04:11:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:28.460 04:11:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:28.460 04:11:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:28.460 04:11:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:28.460 04:11:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:28.460 "name": "raid_bdev1", 00:20:28.460 "uuid": "febd8940-d513-4907-998e-0fa93c290af2", 00:20:28.460 "strip_size_kb": 0, 00:20:28.460 "state": "online", 00:20:28.460 "raid_level": "raid1", 00:20:28.460 "superblock": true, 00:20:28.460 "num_base_bdevs": 2, 00:20:28.460 "num_base_bdevs_discovered": 1, 00:20:28.460 "num_base_bdevs_operational": 1, 00:20:28.460 "base_bdevs_list": [ 00:20:28.460 { 00:20:28.460 "name": null, 00:20:28.460 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:28.460 "is_configured": false, 00:20:28.460 "data_offset": 0, 00:20:28.460 "data_size": 7936 00:20:28.460 }, 00:20:28.460 { 00:20:28.460 "name": "BaseBdev2", 00:20:28.460 "uuid": "58245152-ebf0-5c42-b18c-255f6951fe25", 00:20:28.460 "is_configured": true, 00:20:28.460 "data_offset": 256, 00:20:28.460 "data_size": 7936 00:20:28.460 } 00:20:28.460 ] 00:20:28.460 }' 00:20:28.460 04:11:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:28.460 04:11:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:29.030 04:11:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:20:29.030 04:11:22 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:29.030 04:11:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:20:29.030 04:11:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none 00:20:29.030 04:11:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:29.030 04:11:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:29.030 04:11:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:29.030 04:11:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:29.030 04:11:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:29.030 04:11:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:29.030 04:11:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:29.030 "name": "raid_bdev1", 00:20:29.030 "uuid": "febd8940-d513-4907-998e-0fa93c290af2", 00:20:29.030 "strip_size_kb": 0, 00:20:29.030 "state": "online", 00:20:29.030 "raid_level": "raid1", 00:20:29.030 "superblock": true, 00:20:29.030 "num_base_bdevs": 2, 00:20:29.030 "num_base_bdevs_discovered": 1, 00:20:29.030 "num_base_bdevs_operational": 1, 00:20:29.030 "base_bdevs_list": [ 00:20:29.030 { 00:20:29.030 "name": null, 00:20:29.030 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:29.030 "is_configured": false, 00:20:29.030 "data_offset": 0, 00:20:29.030 "data_size": 7936 00:20:29.030 }, 00:20:29.030 { 00:20:29.030 "name": "BaseBdev2", 00:20:29.030 "uuid": "58245152-ebf0-5c42-b18c-255f6951fe25", 00:20:29.030 "is_configured": true, 00:20:29.030 "data_offset": 256, 
00:20:29.030 "data_size": 7936 00:20:29.030 } 00:20:29.031 ] 00:20:29.031 }' 00:20:29.031 04:11:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:29.031 04:11:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:20:29.031 04:11:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:29.031 04:11:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:20:29.031 04:11:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:20:29.031 04:11:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:29.031 04:11:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:29.031 04:11:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:29.031 04:11:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:20:29.031 04:11:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:29.031 04:11:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:29.031 [2024-12-06 04:11:22.280456] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:20:29.031 [2024-12-06 04:11:22.280523] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:29.031 [2024-12-06 04:11:22.280548] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:20:29.031 [2024-12-06 04:11:22.280558] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:29.031 [2024-12-06 04:11:22.280764] 
vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:29.031 [2024-12-06 04:11:22.280782] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:20:29.031 [2024-12-06 04:11:22.280839] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:20:29.031 [2024-12-06 04:11:22.280853] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:20:29.031 [2024-12-06 04:11:22.280863] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:20:29.031 [2024-12-06 04:11:22.280875] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:20:29.031 BaseBdev1 00:20:29.031 04:11:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:29.031 04:11:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@775 -- # sleep 1 00:20:29.970 04:11:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:20:29.970 04:11:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:29.970 04:11:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:29.970 04:11:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:20:29.970 04:11:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:20:29.970 04:11:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:20:29.970 04:11:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:29.970 04:11:23 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:29.970 04:11:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:29.970 04:11:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:29.970 04:11:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:29.970 04:11:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:29.970 04:11:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:29.970 04:11:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:29.970 04:11:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:30.230 04:11:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:30.230 "name": "raid_bdev1", 00:20:30.230 "uuid": "febd8940-d513-4907-998e-0fa93c290af2", 00:20:30.230 "strip_size_kb": 0, 00:20:30.230 "state": "online", 00:20:30.230 "raid_level": "raid1", 00:20:30.230 "superblock": true, 00:20:30.230 "num_base_bdevs": 2, 00:20:30.230 "num_base_bdevs_discovered": 1, 00:20:30.230 "num_base_bdevs_operational": 1, 00:20:30.230 "base_bdevs_list": [ 00:20:30.230 { 00:20:30.230 "name": null, 00:20:30.230 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:30.230 "is_configured": false, 00:20:30.230 "data_offset": 0, 00:20:30.230 "data_size": 7936 00:20:30.230 }, 00:20:30.230 { 00:20:30.230 "name": "BaseBdev2", 00:20:30.230 "uuid": "58245152-ebf0-5c42-b18c-255f6951fe25", 00:20:30.230 "is_configured": true, 00:20:30.230 "data_offset": 256, 00:20:30.230 "data_size": 7936 00:20:30.230 } 00:20:30.230 ] 00:20:30.230 }' 00:20:30.230 04:11:23 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:30.230 04:11:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:30.490 04:11:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:20:30.490 04:11:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:30.490 04:11:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:20:30.490 04:11:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none 00:20:30.490 04:11:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:30.490 04:11:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:30.490 04:11:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:30.490 04:11:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:30.490 04:11:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:30.490 04:11:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:30.490 04:11:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:30.490 "name": "raid_bdev1", 00:20:30.490 "uuid": "febd8940-d513-4907-998e-0fa93c290af2", 00:20:30.490 "strip_size_kb": 0, 00:20:30.490 "state": "online", 00:20:30.490 "raid_level": "raid1", 00:20:30.490 "superblock": true, 00:20:30.490 "num_base_bdevs": 2, 00:20:30.490 "num_base_bdevs_discovered": 1, 00:20:30.490 "num_base_bdevs_operational": 1, 00:20:30.490 "base_bdevs_list": [ 00:20:30.490 { 00:20:30.490 "name": 
null, 00:20:30.490 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:30.490 "is_configured": false, 00:20:30.490 "data_offset": 0, 00:20:30.490 "data_size": 7936 00:20:30.490 }, 00:20:30.490 { 00:20:30.490 "name": "BaseBdev2", 00:20:30.490 "uuid": "58245152-ebf0-5c42-b18c-255f6951fe25", 00:20:30.490 "is_configured": true, 00:20:30.490 "data_offset": 256, 00:20:30.490 "data_size": 7936 00:20:30.490 } 00:20:30.490 ] 00:20:30.490 }' 00:20:30.490 04:11:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:30.490 04:11:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:20:30.490 04:11:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:30.490 04:11:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:20:30.490 04:11:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:20:30.490 04:11:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@652 -- # local es=0 00:20:30.490 04:11:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:20:30.490 04:11:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:20:30.750 04:11:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:30.750 04:11:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:20:30.750 04:11:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:30.750 04:11:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- 
common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:20:30.750 04:11:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:30.750 04:11:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:30.750 [2024-12-06 04:11:23.853859] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:20:30.750 [2024-12-06 04:11:23.854018] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:20:30.750 [2024-12-06 04:11:23.854036] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:20:30.750 request: 00:20:30.750 { 00:20:30.750 "base_bdev": "BaseBdev1", 00:20:30.750 "raid_bdev": "raid_bdev1", 00:20:30.750 "method": "bdev_raid_add_base_bdev", 00:20:30.750 "req_id": 1 00:20:30.750 } 00:20:30.750 Got JSON-RPC error response 00:20:30.750 response: 00:20:30.750 { 00:20:30.750 "code": -22, 00:20:30.750 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:20:30.750 } 00:20:30.750 04:11:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:20:30.750 04:11:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@655 -- # es=1 00:20:30.750 04:11:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:20:30.750 04:11:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:20:30.750 04:11:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:20:30.750 04:11:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@779 -- # sleep 1 00:20:31.689 04:11:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state 
raid_bdev1 online raid1 0 1 00:20:31.689 04:11:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:31.689 04:11:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:31.689 04:11:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:20:31.689 04:11:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:20:31.689 04:11:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:20:31.689 04:11:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:31.689 04:11:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:31.689 04:11:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:31.689 04:11:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:31.689 04:11:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:31.689 04:11:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:31.689 04:11:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:31.689 04:11:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:31.689 04:11:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:31.689 04:11:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:31.689 "name": "raid_bdev1", 00:20:31.689 "uuid": "febd8940-d513-4907-998e-0fa93c290af2", 00:20:31.689 "strip_size_kb": 0, 
00:20:31.689 "state": "online", 00:20:31.689 "raid_level": "raid1", 00:20:31.689 "superblock": true, 00:20:31.689 "num_base_bdevs": 2, 00:20:31.689 "num_base_bdevs_discovered": 1, 00:20:31.689 "num_base_bdevs_operational": 1, 00:20:31.689 "base_bdevs_list": [ 00:20:31.689 { 00:20:31.689 "name": null, 00:20:31.689 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:31.689 "is_configured": false, 00:20:31.689 "data_offset": 0, 00:20:31.689 "data_size": 7936 00:20:31.689 }, 00:20:31.689 { 00:20:31.689 "name": "BaseBdev2", 00:20:31.689 "uuid": "58245152-ebf0-5c42-b18c-255f6951fe25", 00:20:31.689 "is_configured": true, 00:20:31.689 "data_offset": 256, 00:20:31.689 "data_size": 7936 00:20:31.689 } 00:20:31.689 ] 00:20:31.689 }' 00:20:31.689 04:11:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:31.689 04:11:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:31.949 04:11:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:20:31.949 04:11:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:31.949 04:11:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:20:31.949 04:11:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none 00:20:31.949 04:11:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:32.214 04:11:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:32.214 04:11:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:32.214 04:11:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:32.214 
04:11:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:32.214 04:11:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:32.214 04:11:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:32.214 "name": "raid_bdev1", 00:20:32.214 "uuid": "febd8940-d513-4907-998e-0fa93c290af2", 00:20:32.214 "strip_size_kb": 0, 00:20:32.214 "state": "online", 00:20:32.214 "raid_level": "raid1", 00:20:32.214 "superblock": true, 00:20:32.214 "num_base_bdevs": 2, 00:20:32.214 "num_base_bdevs_discovered": 1, 00:20:32.214 "num_base_bdevs_operational": 1, 00:20:32.214 "base_bdevs_list": [ 00:20:32.214 { 00:20:32.214 "name": null, 00:20:32.214 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:32.214 "is_configured": false, 00:20:32.214 "data_offset": 0, 00:20:32.214 "data_size": 7936 00:20:32.214 }, 00:20:32.214 { 00:20:32.214 "name": "BaseBdev2", 00:20:32.214 "uuid": "58245152-ebf0-5c42-b18c-255f6951fe25", 00:20:32.214 "is_configured": true, 00:20:32.214 "data_offset": 256, 00:20:32.214 "data_size": 7936 00:20:32.214 } 00:20:32.214 ] 00:20:32.214 }' 00:20:32.214 04:11:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:32.214 04:11:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:20:32.214 04:11:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:32.214 04:11:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:20:32.214 04:11:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@784 -- # killprocess 89298 00:20:32.214 04:11:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@954 -- # '[' -z 89298 ']' 00:20:32.214 04:11:25 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@958 -- # kill -0 89298 00:20:32.214 04:11:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@959 -- # uname 00:20:32.214 04:11:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:32.214 04:11:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 89298 00:20:32.214 killing process with pid 89298 00:20:32.214 Received shutdown signal, test time was about 60.000000 seconds 00:20:32.214 00:20:32.214 Latency(us) 00:20:32.214 [2024-12-06T04:11:25.568Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:32.214 [2024-12-06T04:11:25.568Z] =================================================================================================================== 00:20:32.214 [2024-12-06T04:11:25.568Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:20:32.214 04:11:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:20:32.214 04:11:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:20:32.214 04:11:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@972 -- # echo 'killing process with pid 89298' 00:20:32.214 04:11:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@973 -- # kill 89298 00:20:32.214 [2024-12-06 04:11:25.453542] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:20:32.214 [2024-12-06 04:11:25.453674] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:20:32.214 04:11:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@978 -- # wait 89298 00:20:32.214 [2024-12-06 04:11:25.453723] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to 
free all in destruct 00:20:32.214 [2024-12-06 04:11:25.453735] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:20:32.473 [2024-12-06 04:11:25.754389] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:20:33.854 04:11:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@786 -- # return 0 00:20:33.854 00:20:33.854 real 0m17.522s 00:20:33.854 user 0m22.962s 00:20:33.854 sys 0m1.655s 00:20:33.854 ************************************ 00:20:33.854 END TEST raid_rebuild_test_sb_md_interleaved 00:20:33.854 ************************************ 00:20:33.854 04:11:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:33.854 04:11:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:33.854 04:11:26 bdev_raid -- bdev/bdev_raid.sh@1015 -- # trap - EXIT 00:20:33.854 04:11:26 bdev_raid -- bdev/bdev_raid.sh@1016 -- # cleanup 00:20:33.854 04:11:26 bdev_raid -- bdev/bdev_raid.sh@56 -- # '[' -n 89298 ']' 00:20:33.854 04:11:26 bdev_raid -- bdev/bdev_raid.sh@56 -- # ps -p 89298 00:20:33.854 04:11:26 bdev_raid -- bdev/bdev_raid.sh@60 -- # rm -rf /raidtest 00:20:33.854 00:20:33.854 real 12m14.282s 00:20:33.854 user 16m29.997s 00:20:33.854 sys 1m51.435s 00:20:33.854 ************************************ 00:20:33.854 END TEST bdev_raid 00:20:33.854 04:11:26 bdev_raid -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:33.854 04:11:26 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:20:33.854 ************************************ 00:20:33.854 04:11:26 -- spdk/autotest.sh@190 -- # run_test spdkcli_raid /home/vagrant/spdk_repo/spdk/test/spdkcli/raid.sh 00:20:33.854 04:11:26 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:20:33.854 04:11:26 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:33.854 04:11:26 -- common/autotest_common.sh@10 -- # set +x 00:20:33.854 
************************************ 00:20:33.854 START TEST spdkcli_raid 00:20:33.854 ************************************ 00:20:33.854 04:11:27 spdkcli_raid -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/raid.sh 00:20:33.854 * Looking for test storage... 00:20:33.854 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:20:33.854 04:11:27 spdkcli_raid -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:20:33.854 04:11:27 spdkcli_raid -- common/autotest_common.sh@1711 -- # lcov --version 00:20:33.854 04:11:27 spdkcli_raid -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:20:33.854 04:11:27 spdkcli_raid -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:20:33.854 04:11:27 spdkcli_raid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:20:33.854 04:11:27 spdkcli_raid -- scripts/common.sh@333 -- # local ver1 ver1_l 00:20:33.854 04:11:27 spdkcli_raid -- scripts/common.sh@334 -- # local ver2 ver2_l 00:20:33.854 04:11:27 spdkcli_raid -- scripts/common.sh@336 -- # IFS=.-: 00:20:33.854 04:11:27 spdkcli_raid -- scripts/common.sh@336 -- # read -ra ver1 00:20:33.854 04:11:27 spdkcli_raid -- scripts/common.sh@337 -- # IFS=.-: 00:20:33.854 04:11:27 spdkcli_raid -- scripts/common.sh@337 -- # read -ra ver2 00:20:33.854 04:11:27 spdkcli_raid -- scripts/common.sh@338 -- # local 'op=<' 00:20:33.854 04:11:27 spdkcli_raid -- scripts/common.sh@340 -- # ver1_l=2 00:20:33.854 04:11:27 spdkcli_raid -- scripts/common.sh@341 -- # ver2_l=1 00:20:33.854 04:11:27 spdkcli_raid -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:20:33.854 04:11:27 spdkcli_raid -- scripts/common.sh@344 -- # case "$op" in 00:20:33.854 04:11:27 spdkcli_raid -- scripts/common.sh@345 -- # : 1 00:20:33.854 04:11:27 spdkcli_raid -- scripts/common.sh@364 -- # (( v = 0 )) 00:20:33.854 04:11:27 spdkcli_raid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:20:33.854 04:11:27 spdkcli_raid -- scripts/common.sh@365 -- # decimal 1 00:20:33.854 04:11:27 spdkcli_raid -- scripts/common.sh@353 -- # local d=1 00:20:33.854 04:11:27 spdkcli_raid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:33.854 04:11:27 spdkcli_raid -- scripts/common.sh@355 -- # echo 1 00:20:34.115 04:11:27 spdkcli_raid -- scripts/common.sh@365 -- # ver1[v]=1 00:20:34.115 04:11:27 spdkcli_raid -- scripts/common.sh@366 -- # decimal 2 00:20:34.115 04:11:27 spdkcli_raid -- scripts/common.sh@353 -- # local d=2 00:20:34.115 04:11:27 spdkcli_raid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:34.115 04:11:27 spdkcli_raid -- scripts/common.sh@355 -- # echo 2 00:20:34.115 04:11:27 spdkcli_raid -- scripts/common.sh@366 -- # ver2[v]=2 00:20:34.115 04:11:27 spdkcli_raid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:20:34.115 04:11:27 spdkcli_raid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:20:34.115 04:11:27 spdkcli_raid -- scripts/common.sh@368 -- # return 0 00:20:34.115 04:11:27 spdkcli_raid -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:34.115 04:11:27 spdkcli_raid -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:20:34.115 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:34.115 --rc genhtml_branch_coverage=1 00:20:34.115 --rc genhtml_function_coverage=1 00:20:34.115 --rc genhtml_legend=1 00:20:34.115 --rc geninfo_all_blocks=1 00:20:34.115 --rc geninfo_unexecuted_blocks=1 00:20:34.115 00:20:34.115 ' 00:20:34.115 04:11:27 spdkcli_raid -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:20:34.115 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:34.115 --rc genhtml_branch_coverage=1 00:20:34.115 --rc genhtml_function_coverage=1 00:20:34.115 --rc genhtml_legend=1 00:20:34.115 --rc geninfo_all_blocks=1 00:20:34.115 --rc geninfo_unexecuted_blocks=1 00:20:34.115 00:20:34.115 ' 00:20:34.115 
04:11:27 spdkcli_raid -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:20:34.115 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:34.115 --rc genhtml_branch_coverage=1 00:20:34.115 --rc genhtml_function_coverage=1 00:20:34.115 --rc genhtml_legend=1 00:20:34.115 --rc geninfo_all_blocks=1 00:20:34.115 --rc geninfo_unexecuted_blocks=1 00:20:34.115 00:20:34.115 ' 00:20:34.115 04:11:27 spdkcli_raid -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:20:34.115 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:34.115 --rc genhtml_branch_coverage=1 00:20:34.115 --rc genhtml_function_coverage=1 00:20:34.115 --rc genhtml_legend=1 00:20:34.115 --rc geninfo_all_blocks=1 00:20:34.115 --rc geninfo_unexecuted_blocks=1 00:20:34.115 00:20:34.115 ' 00:20:34.115 04:11:27 spdkcli_raid -- spdkcli/raid.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:20:34.115 04:11:27 spdkcli_raid -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:20:34.115 04:11:27 spdkcli_raid -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:20:34.115 04:11:27 spdkcli_raid -- spdkcli/raid.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/common.sh 00:20:34.115 04:11:27 spdkcli_raid -- iscsi_tgt/common.sh@9 -- # ISCSI_BRIDGE=iscsi_br 00:20:34.115 04:11:27 spdkcli_raid -- iscsi_tgt/common.sh@10 -- # INITIATOR_INTERFACE=spdk_init_int 00:20:34.115 04:11:27 spdkcli_raid -- iscsi_tgt/common.sh@11 -- # INITIATOR_BRIDGE=init_br 00:20:34.115 04:11:27 spdkcli_raid -- iscsi_tgt/common.sh@12 -- # TARGET_NAMESPACE=spdk_iscsi_ns 00:20:34.115 04:11:27 spdkcli_raid -- iscsi_tgt/common.sh@13 -- # TARGET_NS_CMD=(ip netns exec "$TARGET_NAMESPACE") 00:20:34.115 04:11:27 spdkcli_raid -- iscsi_tgt/common.sh@14 -- # TARGET_INTERFACE=spdk_tgt_int 00:20:34.115 04:11:27 spdkcli_raid -- iscsi_tgt/common.sh@15 -- # TARGET_INTERFACE2=spdk_tgt_int2 
00:20:34.115 04:11:27 spdkcli_raid -- iscsi_tgt/common.sh@16 -- # TARGET_BRIDGE=tgt_br 00:20:34.115 04:11:27 spdkcli_raid -- iscsi_tgt/common.sh@17 -- # TARGET_BRIDGE2=tgt_br2 00:20:34.115 04:11:27 spdkcli_raid -- iscsi_tgt/common.sh@20 -- # TARGET_IP=10.0.0.1 00:20:34.115 04:11:27 spdkcli_raid -- iscsi_tgt/common.sh@21 -- # TARGET_IP2=10.0.0.3 00:20:34.115 04:11:27 spdkcli_raid -- iscsi_tgt/common.sh@22 -- # INITIATOR_IP=10.0.0.2 00:20:34.115 04:11:27 spdkcli_raid -- iscsi_tgt/common.sh@23 -- # ISCSI_PORT=3260 00:20:34.115 04:11:27 spdkcli_raid -- iscsi_tgt/common.sh@24 -- # NETMASK=10.0.0.2/32 00:20:34.115 04:11:27 spdkcli_raid -- iscsi_tgt/common.sh@25 -- # INITIATOR_TAG=2 00:20:34.115 04:11:27 spdkcli_raid -- iscsi_tgt/common.sh@26 -- # INITIATOR_NAME=ANY 00:20:34.115 04:11:27 spdkcli_raid -- iscsi_tgt/common.sh@27 -- # PORTAL_TAG=1 00:20:34.115 04:11:27 spdkcli_raid -- iscsi_tgt/common.sh@28 -- # ISCSI_APP=("${TARGET_NS_CMD[@]}" "${ISCSI_APP[@]}") 00:20:34.115 04:11:27 spdkcli_raid -- iscsi_tgt/common.sh@29 -- # ISCSI_TEST_CORE_MASK=0xF 00:20:34.115 04:11:27 spdkcli_raid -- spdkcli/raid.sh@12 -- # MATCH_FILE=spdkcli_raid.test 00:20:34.115 04:11:27 spdkcli_raid -- spdkcli/raid.sh@13 -- # SPDKCLI_BRANCH=/bdevs 00:20:34.115 04:11:27 spdkcli_raid -- spdkcli/raid.sh@14 -- # dirname /home/vagrant/spdk_repo/spdk/test/spdkcli/raid.sh 00:20:34.115 04:11:27 spdkcli_raid -- spdkcli/raid.sh@14 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/spdkcli 00:20:34.115 04:11:27 spdkcli_raid -- spdkcli/raid.sh@14 -- # testdir=/home/vagrant/spdk_repo/spdk/test/spdkcli 00:20:34.115 04:11:27 spdkcli_raid -- spdkcli/raid.sh@15 -- # . 
/home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:20:34.115 04:11:27 spdkcli_raid -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:20:34.115 04:11:27 spdkcli_raid -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:20:34.115 04:11:27 spdkcli_raid -- spdkcli/raid.sh@17 -- # trap cleanup EXIT 00:20:34.115 04:11:27 spdkcli_raid -- spdkcli/raid.sh@19 -- # timing_enter run_spdk_tgt 00:20:34.115 04:11:27 spdkcli_raid -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:34.115 04:11:27 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:20:34.115 04:11:27 spdkcli_raid -- spdkcli/raid.sh@20 -- # run_spdk_tgt 00:20:34.115 04:11:27 spdkcli_raid -- spdkcli/common.sh@27 -- # spdk_tgt_pid=89974 00:20:34.115 04:11:27 spdkcli_raid -- spdkcli/common.sh@26 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:20:34.115 04:11:27 spdkcli_raid -- spdkcli/common.sh@28 -- # waitforlisten 89974 00:20:34.115 04:11:27 spdkcli_raid -- common/autotest_common.sh@835 -- # '[' -z 89974 ']' 00:20:34.115 04:11:27 spdkcli_raid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:34.115 04:11:27 spdkcli_raid -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:34.115 04:11:27 spdkcli_raid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:34.115 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:34.115 04:11:27 spdkcli_raid -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:34.115 04:11:27 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:20:34.115 [2024-12-06 04:11:27.350884] Starting SPDK v25.01-pre git sha1 a4d2a837b / DPDK 24.03.0 initialization... 
00:20:34.115 [2024-12-06 04:11:27.351007] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89974 ] 00:20:34.375 [2024-12-06 04:11:27.527290] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:20:34.375 [2024-12-06 04:11:27.642457] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:34.375 [2024-12-06 04:11:27.642495] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:35.315 04:11:28 spdkcli_raid -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:35.315 04:11:28 spdkcli_raid -- common/autotest_common.sh@868 -- # return 0 00:20:35.315 04:11:28 spdkcli_raid -- spdkcli/raid.sh@21 -- # timing_exit run_spdk_tgt 00:20:35.315 04:11:28 spdkcli_raid -- common/autotest_common.sh@732 -- # xtrace_disable 00:20:35.315 04:11:28 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:20:35.315 04:11:28 spdkcli_raid -- spdkcli/raid.sh@23 -- # timing_enter spdkcli_create_malloc 00:20:35.315 04:11:28 spdkcli_raid -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:35.315 04:11:28 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:20:35.315 04:11:28 spdkcli_raid -- spdkcli/raid.sh@26 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc create 8 512 Malloc1'\'' '\''Malloc1'\'' True 00:20:35.315 '\''/bdevs/malloc create 8 512 Malloc2'\'' '\''Malloc2'\'' True 00:20:35.315 ' 00:20:37.223 Executing command: ['/bdevs/malloc create 8 512 Malloc1', 'Malloc1', True] 00:20:37.223 Executing command: ['/bdevs/malloc create 8 512 Malloc2', 'Malloc2', True] 00:20:37.223 04:11:30 spdkcli_raid -- spdkcli/raid.sh@27 -- # timing_exit spdkcli_create_malloc 00:20:37.223 04:11:30 spdkcli_raid -- common/autotest_common.sh@732 -- # xtrace_disable 00:20:37.223 04:11:30 spdkcli_raid -- 
common/autotest_common.sh@10 -- # set +x 00:20:37.223 04:11:30 spdkcli_raid -- spdkcli/raid.sh@29 -- # timing_enter spdkcli_create_raid 00:20:37.223 04:11:30 spdkcli_raid -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:37.223 04:11:30 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:20:37.223 04:11:30 spdkcli_raid -- spdkcli/raid.sh@31 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/raid_volume create testraid 0 "Malloc1 Malloc2" 4'\'' '\''testraid'\'' True 00:20:37.223 ' 00:20:38.179 Executing command: ['/bdevs/raid_volume create testraid 0 "Malloc1 Malloc2" 4', 'testraid', True] 00:20:38.179 04:11:31 spdkcli_raid -- spdkcli/raid.sh@32 -- # timing_exit spdkcli_create_raid 00:20:38.179 04:11:31 spdkcli_raid -- common/autotest_common.sh@732 -- # xtrace_disable 00:20:38.179 04:11:31 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:20:38.179 04:11:31 spdkcli_raid -- spdkcli/raid.sh@34 -- # timing_enter spdkcli_check_match 00:20:38.179 04:11:31 spdkcli_raid -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:38.179 04:11:31 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:20:38.179 04:11:31 spdkcli_raid -- spdkcli/raid.sh@35 -- # check_match 00:20:38.179 04:11:31 spdkcli_raid -- spdkcli/common.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/spdkcli.py ll /bdevs 00:20:38.747 04:11:32 spdkcli_raid -- spdkcli/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/test/app/match/match /home/vagrant/spdk_repo/spdk/test/spdkcli/match_files/spdkcli_raid.test.match 00:20:38.747 04:11:32 spdkcli_raid -- spdkcli/common.sh@46 -- # rm -f /home/vagrant/spdk_repo/spdk/test/spdkcli/match_files/spdkcli_raid.test 00:20:38.747 04:11:32 spdkcli_raid -- spdkcli/raid.sh@36 -- # timing_exit spdkcli_check_match 00:20:38.747 04:11:32 spdkcli_raid -- common/autotest_common.sh@732 -- # xtrace_disable 00:20:38.747 04:11:32 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:20:39.007 04:11:32 spdkcli_raid -- 
spdkcli/raid.sh@38 -- # timing_enter spdkcli_delete_raid 00:20:39.007 04:11:32 spdkcli_raid -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:39.007 04:11:32 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:20:39.007 04:11:32 spdkcli_raid -- spdkcli/raid.sh@40 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/raid_volume delete testraid'\'' '\'''\'' True 00:20:39.007 ' 00:20:39.945 Executing command: ['/bdevs/raid_volume delete testraid', '', True] 00:20:39.945 04:11:33 spdkcli_raid -- spdkcli/raid.sh@41 -- # timing_exit spdkcli_delete_raid 00:20:39.945 04:11:33 spdkcli_raid -- common/autotest_common.sh@732 -- # xtrace_disable 00:20:39.945 04:11:33 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:20:39.945 04:11:33 spdkcli_raid -- spdkcli/raid.sh@43 -- # timing_enter spdkcli_delete_malloc 00:20:39.945 04:11:33 spdkcli_raid -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:39.945 04:11:33 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:20:39.945 04:11:33 spdkcli_raid -- spdkcli/raid.sh@46 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc delete Malloc1'\'' '\'''\'' True 00:20:39.945 '\''/bdevs/malloc delete Malloc2'\'' '\'''\'' True 00:20:39.945 ' 00:20:41.324 Executing command: ['/bdevs/malloc delete Malloc1', '', True] 00:20:41.324 Executing command: ['/bdevs/malloc delete Malloc2', '', True] 00:20:41.584 04:11:34 spdkcli_raid -- spdkcli/raid.sh@47 -- # timing_exit spdkcli_delete_malloc 00:20:41.584 04:11:34 spdkcli_raid -- common/autotest_common.sh@732 -- # xtrace_disable 00:20:41.584 04:11:34 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:20:41.584 04:11:34 spdkcli_raid -- spdkcli/raid.sh@49 -- # killprocess 89974 00:20:41.584 04:11:34 spdkcli_raid -- common/autotest_common.sh@954 -- # '[' -z 89974 ']' 00:20:41.584 04:11:34 spdkcli_raid -- common/autotest_common.sh@958 -- # kill -0 89974 00:20:41.584 04:11:34 spdkcli_raid -- 
common/autotest_common.sh@959 -- # uname 00:20:41.584 04:11:34 spdkcli_raid -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:41.584 04:11:34 spdkcli_raid -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 89974 00:20:41.584 04:11:34 spdkcli_raid -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:20:41.584 04:11:34 spdkcli_raid -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:20:41.584 04:11:34 spdkcli_raid -- common/autotest_common.sh@972 -- # echo 'killing process with pid 89974' 00:20:41.584 killing process with pid 89974 00:20:41.584 04:11:34 spdkcli_raid -- common/autotest_common.sh@973 -- # kill 89974 00:20:41.584 04:11:34 spdkcli_raid -- common/autotest_common.sh@978 -- # wait 89974 00:20:44.121 04:11:37 spdkcli_raid -- spdkcli/raid.sh@1 -- # cleanup 00:20:44.121 04:11:37 spdkcli_raid -- spdkcli/common.sh@10 -- # '[' -n 89974 ']' 00:20:44.121 04:11:37 spdkcli_raid -- spdkcli/common.sh@11 -- # killprocess 89974 00:20:44.121 04:11:37 spdkcli_raid -- common/autotest_common.sh@954 -- # '[' -z 89974 ']' 00:20:44.121 04:11:37 spdkcli_raid -- common/autotest_common.sh@958 -- # kill -0 89974 00:20:44.121 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (89974) - No such process 00:20:44.121 04:11:37 spdkcli_raid -- common/autotest_common.sh@981 -- # echo 'Process with pid 89974 is not found' 00:20:44.121 Process with pid 89974 is not found 00:20:44.121 04:11:37 spdkcli_raid -- spdkcli/common.sh@13 -- # '[' -n '' ']' 00:20:44.121 04:11:37 spdkcli_raid -- spdkcli/common.sh@16 -- # '[' -n '' ']' 00:20:44.121 04:11:37 spdkcli_raid -- spdkcli/common.sh@19 -- # '[' -n '' ']' 00:20:44.121 04:11:37 spdkcli_raid -- spdkcli/common.sh@22 -- # rm -f /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_raid.test /home/vagrant/spdk_repo/spdk/test/spdkcli/match_files/spdkcli_details_vhost.test /tmp/sample_aio 00:20:44.121 00:20:44.121 real 0m10.238s 00:20:44.121 user 0m21.170s 00:20:44.121 sys 
0m1.127s 00:20:44.121 04:11:37 spdkcli_raid -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:44.121 04:11:37 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:20:44.121 ************************************ 00:20:44.121 END TEST spdkcli_raid 00:20:44.121 ************************************ 00:20:44.121 04:11:37 -- spdk/autotest.sh@191 -- # run_test blockdev_raid5f /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh raid5f 00:20:44.121 04:11:37 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:20:44.121 04:11:37 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:44.121 04:11:37 -- common/autotest_common.sh@10 -- # set +x 00:20:44.121 ************************************ 00:20:44.121 START TEST blockdev_raid5f 00:20:44.121 ************************************ 00:20:44.121 04:11:37 blockdev_raid5f -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh raid5f 00:20:44.121 * Looking for test storage... 00:20:44.121 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:20:44.121 04:11:37 blockdev_raid5f -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:20:44.121 04:11:37 blockdev_raid5f -- common/autotest_common.sh@1711 -- # lcov --version 00:20:44.121 04:11:37 blockdev_raid5f -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:20:44.381 04:11:37 blockdev_raid5f -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:20:44.381 04:11:37 blockdev_raid5f -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:20:44.381 04:11:37 blockdev_raid5f -- scripts/common.sh@333 -- # local ver1 ver1_l 00:20:44.381 04:11:37 blockdev_raid5f -- scripts/common.sh@334 -- # local ver2 ver2_l 00:20:44.381 04:11:37 blockdev_raid5f -- scripts/common.sh@336 -- # IFS=.-: 00:20:44.381 04:11:37 blockdev_raid5f -- scripts/common.sh@336 -- # read -ra ver1 00:20:44.381 04:11:37 blockdev_raid5f -- scripts/common.sh@337 -- # IFS=.-: 00:20:44.381 04:11:37 blockdev_raid5f -- scripts/common.sh@337 -- # read -ra 
ver2 00:20:44.381 04:11:37 blockdev_raid5f -- scripts/common.sh@338 -- # local 'op=<' 00:20:44.381 04:11:37 blockdev_raid5f -- scripts/common.sh@340 -- # ver1_l=2 00:20:44.381 04:11:37 blockdev_raid5f -- scripts/common.sh@341 -- # ver2_l=1 00:20:44.381 04:11:37 blockdev_raid5f -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:20:44.381 04:11:37 blockdev_raid5f -- scripts/common.sh@344 -- # case "$op" in 00:20:44.381 04:11:37 blockdev_raid5f -- scripts/common.sh@345 -- # : 1 00:20:44.381 04:11:37 blockdev_raid5f -- scripts/common.sh@364 -- # (( v = 0 )) 00:20:44.381 04:11:37 blockdev_raid5f -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:20:44.381 04:11:37 blockdev_raid5f -- scripts/common.sh@365 -- # decimal 1 00:20:44.381 04:11:37 blockdev_raid5f -- scripts/common.sh@353 -- # local d=1 00:20:44.381 04:11:37 blockdev_raid5f -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:44.381 04:11:37 blockdev_raid5f -- scripts/common.sh@355 -- # echo 1 00:20:44.381 04:11:37 blockdev_raid5f -- scripts/common.sh@365 -- # ver1[v]=1 00:20:44.381 04:11:37 blockdev_raid5f -- scripts/common.sh@366 -- # decimal 2 00:20:44.381 04:11:37 blockdev_raid5f -- scripts/common.sh@353 -- # local d=2 00:20:44.381 04:11:37 blockdev_raid5f -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:44.381 04:11:37 blockdev_raid5f -- scripts/common.sh@355 -- # echo 2 00:20:44.381 04:11:37 blockdev_raid5f -- scripts/common.sh@366 -- # ver2[v]=2 00:20:44.381 04:11:37 blockdev_raid5f -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:20:44.381 04:11:37 blockdev_raid5f -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:20:44.381 04:11:37 blockdev_raid5f -- scripts/common.sh@368 -- # return 0 00:20:44.381 04:11:37 blockdev_raid5f -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:44.381 04:11:37 blockdev_raid5f -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:20:44.381 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:44.381 --rc genhtml_branch_coverage=1 00:20:44.381 --rc genhtml_function_coverage=1 00:20:44.381 --rc genhtml_legend=1 00:20:44.381 --rc geninfo_all_blocks=1 00:20:44.381 --rc geninfo_unexecuted_blocks=1 00:20:44.381 00:20:44.381 ' 00:20:44.381 04:11:37 blockdev_raid5f -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:20:44.381 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:44.381 --rc genhtml_branch_coverage=1 00:20:44.381 --rc genhtml_function_coverage=1 00:20:44.381 --rc genhtml_legend=1 00:20:44.381 --rc geninfo_all_blocks=1 00:20:44.381 --rc geninfo_unexecuted_blocks=1 00:20:44.381 00:20:44.381 ' 00:20:44.381 04:11:37 blockdev_raid5f -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:20:44.381 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:44.381 --rc genhtml_branch_coverage=1 00:20:44.381 --rc genhtml_function_coverage=1 00:20:44.381 --rc genhtml_legend=1 00:20:44.381 --rc geninfo_all_blocks=1 00:20:44.381 --rc geninfo_unexecuted_blocks=1 00:20:44.381 00:20:44.381 ' 00:20:44.381 04:11:37 blockdev_raid5f -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:20:44.381 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:44.381 --rc genhtml_branch_coverage=1 00:20:44.381 --rc genhtml_function_coverage=1 00:20:44.381 --rc genhtml_legend=1 00:20:44.381 --rc geninfo_all_blocks=1 00:20:44.381 --rc geninfo_unexecuted_blocks=1 00:20:44.381 00:20:44.381 ' 00:20:44.381 04:11:37 blockdev_raid5f -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:20:44.381 04:11:37 blockdev_raid5f -- bdev/nbd_common.sh@6 -- # set -e 00:20:44.381 04:11:37 blockdev_raid5f -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:20:44.381 04:11:37 blockdev_raid5f -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:20:44.381 04:11:37 blockdev_raid5f -- bdev/blockdev.sh@14 -- # 
nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:20:44.381 04:11:37 blockdev_raid5f -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:20:44.381 04:11:37 blockdev_raid5f -- bdev/blockdev.sh@17 -- # export RPC_PIPE_TIMEOUT=30 00:20:44.381 04:11:37 blockdev_raid5f -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:20:44.381 04:11:37 blockdev_raid5f -- bdev/blockdev.sh@20 -- # : 00:20:44.381 04:11:37 blockdev_raid5f -- bdev/blockdev.sh@707 -- # QOS_DEV_1=Malloc_0 00:20:44.381 04:11:37 blockdev_raid5f -- bdev/blockdev.sh@708 -- # QOS_DEV_2=Null_1 00:20:44.381 04:11:37 blockdev_raid5f -- bdev/blockdev.sh@709 -- # QOS_RUN_TIME=5 00:20:44.381 04:11:37 blockdev_raid5f -- bdev/blockdev.sh@711 -- # uname -s 00:20:44.381 04:11:37 blockdev_raid5f -- bdev/blockdev.sh@711 -- # '[' Linux = Linux ']' 00:20:44.381 04:11:37 blockdev_raid5f -- bdev/blockdev.sh@713 -- # PRE_RESERVED_MEM=0 00:20:44.381 04:11:37 blockdev_raid5f -- bdev/blockdev.sh@719 -- # test_type=raid5f 00:20:44.381 04:11:37 blockdev_raid5f -- bdev/blockdev.sh@720 -- # crypto_device= 00:20:44.382 04:11:37 blockdev_raid5f -- bdev/blockdev.sh@721 -- # dek= 00:20:44.382 04:11:37 blockdev_raid5f -- bdev/blockdev.sh@722 -- # env_ctx= 00:20:44.382 04:11:37 blockdev_raid5f -- bdev/blockdev.sh@723 -- # wait_for_rpc= 00:20:44.382 04:11:37 blockdev_raid5f -- bdev/blockdev.sh@724 -- # '[' -n '' ']' 00:20:44.382 04:11:37 blockdev_raid5f -- bdev/blockdev.sh@727 -- # [[ raid5f == bdev ]] 00:20:44.382 04:11:37 blockdev_raid5f -- bdev/blockdev.sh@727 -- # [[ raid5f == crypto_* ]] 00:20:44.382 04:11:37 blockdev_raid5f -- bdev/blockdev.sh@730 -- # start_spdk_tgt 00:20:44.382 04:11:37 blockdev_raid5f -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=90254 00:20:44.382 04:11:37 blockdev_raid5f -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:20:44.382 04:11:37 blockdev_raid5f -- bdev/blockdev.sh@46 -- # 
/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:20:44.382 04:11:37 blockdev_raid5f -- bdev/blockdev.sh@49 -- # waitforlisten 90254 00:20:44.382 04:11:37 blockdev_raid5f -- common/autotest_common.sh@835 -- # '[' -z 90254 ']' 00:20:44.382 04:11:37 blockdev_raid5f -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:44.382 04:11:37 blockdev_raid5f -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:44.382 04:11:37 blockdev_raid5f -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:44.382 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:44.382 04:11:37 blockdev_raid5f -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:44.382 04:11:37 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:20:44.382 [2024-12-06 04:11:37.645777] Starting SPDK v25.01-pre git sha1 a4d2a837b / DPDK 24.03.0 initialization... 
00:20:44.382 [2024-12-06 04:11:37.645984] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90254 ] 00:20:44.641 [2024-12-06 04:11:37.820436] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:44.641 [2024-12-06 04:11:37.934109] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:45.579 04:11:38 blockdev_raid5f -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:45.579 04:11:38 blockdev_raid5f -- common/autotest_common.sh@868 -- # return 0 00:20:45.579 04:11:38 blockdev_raid5f -- bdev/blockdev.sh@731 -- # case "$test_type" in 00:20:45.579 04:11:38 blockdev_raid5f -- bdev/blockdev.sh@763 -- # setup_raid5f_conf 00:20:45.579 04:11:38 blockdev_raid5f -- bdev/blockdev.sh@279 -- # rpc_cmd 00:20:45.579 04:11:38 blockdev_raid5f -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:45.579 04:11:38 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:20:45.579 Malloc0 00:20:45.579 Malloc1 00:20:45.579 Malloc2 00:20:45.579 04:11:38 blockdev_raid5f -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:45.579 04:11:38 blockdev_raid5f -- bdev/blockdev.sh@774 -- # rpc_cmd bdev_wait_for_examine 00:20:45.579 04:11:38 blockdev_raid5f -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:45.579 04:11:38 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:20:45.839 04:11:38 blockdev_raid5f -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:45.839 04:11:38 blockdev_raid5f -- bdev/blockdev.sh@777 -- # cat 00:20:45.839 04:11:38 blockdev_raid5f -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n accel 00:20:45.839 04:11:38 blockdev_raid5f -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:45.839 04:11:38 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:20:45.839 04:11:38 
blockdev_raid5f -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:45.839 04:11:38 blockdev_raid5f -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n bdev 00:20:45.839 04:11:38 blockdev_raid5f -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:45.839 04:11:38 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:20:45.839 04:11:38 blockdev_raid5f -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:45.839 04:11:38 blockdev_raid5f -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n iobuf 00:20:45.839 04:11:38 blockdev_raid5f -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:45.839 04:11:38 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:20:45.839 04:11:39 blockdev_raid5f -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:45.839 04:11:39 blockdev_raid5f -- bdev/blockdev.sh@785 -- # mapfile -t bdevs 00:20:45.839 04:11:39 blockdev_raid5f -- bdev/blockdev.sh@785 -- # jq -r '.[] | select(.claimed == false)' 00:20:45.839 04:11:39 blockdev_raid5f -- bdev/blockdev.sh@785 -- # rpc_cmd bdev_get_bdevs 00:20:45.839 04:11:39 blockdev_raid5f -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:45.839 04:11:39 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:20:45.839 04:11:39 blockdev_raid5f -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:45.839 04:11:39 blockdev_raid5f -- bdev/blockdev.sh@786 -- # mapfile -t bdevs_name 00:20:45.839 04:11:39 blockdev_raid5f -- bdev/blockdev.sh@786 -- # jq -r .name 00:20:45.839 04:11:39 blockdev_raid5f -- bdev/blockdev.sh@786 -- # printf '%s\n' '{' ' "name": "raid5f",' ' "aliases": [' ' "22595baf-b11c-4048-9efe-70f9261ae7fd"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 131072,' ' "uuid": "22595baf-b11c-4048-9efe-70f9261ae7fd",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' 
"supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "raid": {' ' "uuid": "22595baf-b11c-4048-9efe-70f9261ae7fd",' ' "strip_size_kb": 2,' ' "state": "online",' ' "raid_level": "raid5f",' ' "superblock": false,' ' "num_base_bdevs": 3,' ' "num_base_bdevs_discovered": 3,' ' "num_base_bdevs_operational": 3,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc0",' ' "uuid": "f76a54f0-2b1b-41b6-8de4-2f65cd3043ea",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc1",' ' "uuid": "59ea4b71-fdfc-44e4-9047-5d27fb304c08",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc2",' ' "uuid": "6d00c3be-988f-4c59-bef6-77479ccc4957",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' 00:20:45.839 04:11:39 blockdev_raid5f -- bdev/blockdev.sh@787 -- # bdev_list=("${bdevs_name[@]}") 00:20:45.839 04:11:39 blockdev_raid5f -- bdev/blockdev.sh@789 -- # hello_world_bdev=raid5f 00:20:45.839 04:11:39 blockdev_raid5f -- bdev/blockdev.sh@790 -- # trap - SIGINT SIGTERM EXIT 00:20:45.839 04:11:39 blockdev_raid5f -- bdev/blockdev.sh@791 -- # killprocess 90254 00:20:45.839 04:11:39 blockdev_raid5f -- common/autotest_common.sh@954 -- # '[' -z 90254 ']' 00:20:45.839 04:11:39 blockdev_raid5f -- common/autotest_common.sh@958 -- # kill -0 90254 00:20:45.839 04:11:39 blockdev_raid5f -- common/autotest_common.sh@959 -- # uname 00:20:45.839 04:11:39 blockdev_raid5f -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:45.839 
04:11:39 blockdev_raid5f -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 90254 00:20:45.839 killing process with pid 90254 00:20:45.839 04:11:39 blockdev_raid5f -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:20:45.839 04:11:39 blockdev_raid5f -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:20:45.839 04:11:39 blockdev_raid5f -- common/autotest_common.sh@972 -- # echo 'killing process with pid 90254' 00:20:45.839 04:11:39 blockdev_raid5f -- common/autotest_common.sh@973 -- # kill 90254 00:20:45.839 04:11:39 blockdev_raid5f -- common/autotest_common.sh@978 -- # wait 90254 00:20:49.132 04:11:41 blockdev_raid5f -- bdev/blockdev.sh@795 -- # trap cleanup SIGINT SIGTERM EXIT 00:20:49.132 04:11:41 blockdev_raid5f -- bdev/blockdev.sh@797 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b raid5f '' 00:20:49.132 04:11:41 blockdev_raid5f -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:20:49.132 04:11:41 blockdev_raid5f -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:49.132 04:11:41 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:20:49.132 ************************************ 00:20:49.132 START TEST bdev_hello_world 00:20:49.132 ************************************ 00:20:49.132 04:11:41 blockdev_raid5f.bdev_hello_world -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b raid5f '' 00:20:49.132 [2024-12-06 04:11:41.900803] Starting SPDK v25.01-pre git sha1 a4d2a837b / DPDK 24.03.0 initialization... 
00:20:49.132 [2024-12-06 04:11:41.900925] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90322 ] 00:20:49.132 [2024-12-06 04:11:42.057313] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:49.132 [2024-12-06 04:11:42.172479] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:49.392 [2024-12-06 04:11:42.691393] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:20:49.392 [2024-12-06 04:11:42.691549] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev raid5f 00:20:49.392 [2024-12-06 04:11:42.691589] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:20:49.392 [2024-12-06 04:11:42.692142] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:20:49.392 [2024-12-06 04:11:42.692325] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:20:49.392 [2024-12-06 04:11:42.692344] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:20:49.392 [2024-12-06 04:11:42.692395] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 
00:20:49.392 00:20:49.392 [2024-12-06 04:11:42.692414] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:20:50.833 00:20:50.833 real 0m2.276s 00:20:50.833 user 0m1.918s 00:20:50.833 sys 0m0.237s 00:20:50.833 ************************************ 00:20:50.833 END TEST bdev_hello_world 00:20:50.833 ************************************ 00:20:50.833 04:11:44 blockdev_raid5f.bdev_hello_world -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:50.833 04:11:44 blockdev_raid5f.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:20:50.833 04:11:44 blockdev_raid5f -- bdev/blockdev.sh@798 -- # run_test bdev_bounds bdev_bounds '' 00:20:50.833 04:11:44 blockdev_raid5f -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:20:50.833 04:11:44 blockdev_raid5f -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:50.833 04:11:44 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:20:50.833 ************************************ 00:20:50.833 START TEST bdev_bounds 00:20:50.833 ************************************ 00:20:50.833 04:11:44 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@1129 -- # bdev_bounds '' 00:20:50.833 04:11:44 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@289 -- # bdevio_pid=90369 00:20:50.833 04:11:44 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@288 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:20:50.833 04:11:44 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@290 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:20:50.833 04:11:44 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@291 -- # echo 'Process bdevio pid: 90369' 00:20:50.833 Process bdevio pid: 90369 00:20:50.833 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:20:50.833 04:11:44 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@292 -- # waitforlisten 90369 00:20:50.833 04:11:44 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@835 -- # '[' -z 90369 ']' 00:20:50.833 04:11:44 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:50.833 04:11:44 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:50.833 04:11:44 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:50.833 04:11:44 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:50.833 04:11:44 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:20:51.093 [2024-12-06 04:11:44.242226] Starting SPDK v25.01-pre git sha1 a4d2a837b / DPDK 24.03.0 initialization... 00:20:51.093 [2024-12-06 04:11:44.242430] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90369 ] 00:20:51.093 [2024-12-06 04:11:44.417160] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:20:51.352 [2024-12-06 04:11:44.532412] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:51.352 [2024-12-06 04:11:44.532551] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:51.352 [2024-12-06 04:11:44.532590] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:20:51.921 04:11:45 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:51.921 04:11:45 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@868 -- # return 0 00:20:51.921 04:11:45 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@293 -- # 
/home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:20:51.921 I/O targets: 00:20:51.921 raid5f: 131072 blocks of 512 bytes (64 MiB) 00:20:51.921 00:20:51.921 00:20:51.921 CUnit - A unit testing framework for C - Version 2.1-3 00:20:51.921 http://cunit.sourceforge.net/ 00:20:51.921 00:20:51.921 00:20:51.921 Suite: bdevio tests on: raid5f 00:20:51.921 Test: blockdev write read block ...passed 00:20:51.921 Test: blockdev write zeroes read block ...passed 00:20:51.921 Test: blockdev write zeroes read no split ...passed 00:20:52.178 Test: blockdev write zeroes read split ...passed 00:20:52.179 Test: blockdev write zeroes read split partial ...passed 00:20:52.179 Test: blockdev reset ...passed 00:20:52.179 Test: blockdev write read 8 blocks ...passed 00:20:52.179 Test: blockdev write read size > 128k ...passed 00:20:52.179 Test: blockdev write read invalid size ...passed 00:20:52.179 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:20:52.179 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:20:52.179 Test: blockdev write read max offset ...passed 00:20:52.179 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:20:52.179 Test: blockdev writev readv 8 blocks ...passed 00:20:52.179 Test: blockdev writev readv 30 x 1block ...passed 00:20:52.179 Test: blockdev writev readv block ...passed 00:20:52.179 Test: blockdev writev readv size > 128k ...passed 00:20:52.179 Test: blockdev writev readv size > 128k in two iovs ...passed 00:20:52.179 Test: blockdev comparev and writev ...passed 00:20:52.179 Test: blockdev nvme passthru rw ...passed 00:20:52.179 Test: blockdev nvme passthru vendor specific ...passed 00:20:52.179 Test: blockdev nvme admin passthru ...passed 00:20:52.179 Test: blockdev copy ...passed 00:20:52.179 00:20:52.179 Run Summary: Type Total Ran Passed Failed Inactive 00:20:52.179 suites 1 1 n/a 0 0 00:20:52.179 tests 23 23 23 0 0 00:20:52.179 asserts 130 130 130 0 n/a 
00:20:52.179 00:20:52.179 Elapsed time = 0.600 seconds 00:20:52.179 0 00:20:52.179 04:11:45 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@294 -- # killprocess 90369 00:20:52.179 04:11:45 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@954 -- # '[' -z 90369 ']' 00:20:52.179 04:11:45 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@958 -- # kill -0 90369 00:20:52.179 04:11:45 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@959 -- # uname 00:20:52.179 04:11:45 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:52.179 04:11:45 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 90369 00:20:52.179 04:11:45 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:20:52.179 04:11:45 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:20:52.179 04:11:45 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@972 -- # echo 'killing process with pid 90369' 00:20:52.179 killing process with pid 90369 00:20:52.179 04:11:45 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@973 -- # kill 90369 00:20:52.179 04:11:45 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@978 -- # wait 90369 00:20:53.555 04:11:46 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@295 -- # trap - SIGINT SIGTERM EXIT 00:20:53.555 00:20:53.555 real 0m2.740s 00:20:53.555 user 0m6.854s 00:20:53.555 sys 0m0.364s 00:20:53.555 04:11:46 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:53.555 04:11:46 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:20:53.555 ************************************ 00:20:53.555 END TEST bdev_bounds 00:20:53.555 ************************************ 00:20:53.815 04:11:46 blockdev_raid5f -- bdev/blockdev.sh@799 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json raid5f '' 00:20:53.815 
04:11:46 blockdev_raid5f -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:20:53.815 04:11:46 blockdev_raid5f -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:53.815 04:11:46 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:20:53.815 ************************************ 00:20:53.815 START TEST bdev_nbd 00:20:53.815 ************************************ 00:20:53.815 04:11:46 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@1129 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json raid5f '' 00:20:53.815 04:11:46 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@299 -- # uname -s 00:20:53.815 04:11:46 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@299 -- # [[ Linux == Linux ]] 00:20:53.815 04:11:46 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@301 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:20:53.815 04:11:46 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@302 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:20:53.815 04:11:46 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@303 -- # bdev_all=('raid5f') 00:20:53.815 04:11:46 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@303 -- # local bdev_all 00:20:53.815 04:11:46 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_num=1 00:20:53.815 04:11:46 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@308 -- # [[ -e /sys/module/nbd ]] 00:20:53.815 04:11:46 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@310 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:20:53.815 04:11:46 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@310 -- # local nbd_all 00:20:53.815 04:11:46 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@311 -- # bdev_num=1 00:20:53.815 04:11:46 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@313 -- # nbd_list=('/dev/nbd0') 00:20:53.815 04:11:46 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@313 -- 
# local nbd_list 00:20:53.815 04:11:46 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@314 -- # bdev_list=('raid5f') 00:20:53.815 04:11:46 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@314 -- # local bdev_list 00:20:53.815 04:11:46 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@317 -- # nbd_pid=90428 00:20:53.815 04:11:46 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@316 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:20:53.815 04:11:46 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@318 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:20:53.815 04:11:46 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@319 -- # waitforlisten 90428 /var/tmp/spdk-nbd.sock 00:20:53.815 04:11:46 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@835 -- # '[' -z 90428 ']' 00:20:53.815 04:11:46 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:20:53.815 04:11:46 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:53.815 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:20:53.815 04:11:46 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:20:53.815 04:11:46 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:53.815 04:11:46 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:20:53.815 [2024-12-06 04:11:47.059027] Starting SPDK v25.01-pre git sha1 a4d2a837b / DPDK 24.03.0 initialization... 
00:20:53.815 [2024-12-06 04:11:47.059173] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:54.074 [2024-12-06 04:11:47.218638] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:54.074 [2024-12-06 04:11:47.330684] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:54.642 04:11:47 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:54.642 04:11:47 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@868 -- # return 0 00:20:54.642 04:11:47 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@321 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock raid5f 00:20:54.642 04:11:47 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:20:54.642 04:11:47 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('raid5f') 00:20:54.642 04:11:47 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:20:54.642 04:11:47 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock raid5f 00:20:54.642 04:11:47 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:20:54.642 04:11:47 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('raid5f') 00:20:54.642 04:11:47 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:20:54.642 04:11:47 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:20:54.642 04:11:47 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:20:54.642 04:11:47 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:20:54.642 04:11:47 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 1 )) 00:20:54.642 04:11:47 blockdev_raid5f.bdev_nbd -- 
bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk raid5f 00:20:54.900 04:11:48 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:20:54.900 04:11:48 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:20:54.901 04:11:48 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:20:54.901 04:11:48 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:20:54.901 04:11:48 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:20:54.901 04:11:48 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:20:54.901 04:11:48 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:20:54.901 04:11:48 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:20:54.901 04:11:48 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:20:54.901 04:11:48 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:20:54.901 04:11:48 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:20:54.901 04:11:48 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:20:54.901 1+0 records in 00:20:54.901 1+0 records out 00:20:54.901 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000485656 s, 8.4 MB/s 00:20:54.901 04:11:48 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:54.901 04:11:48 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:20:54.901 04:11:48 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:54.901 04:11:48 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 
00:20:54.901 04:11:48 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:20:54.901 04:11:48 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:20:54.901 04:11:48 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 1 )) 00:20:54.901 04:11:48 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:20:55.159 04:11:48 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:20:55.160 { 00:20:55.160 "nbd_device": "/dev/nbd0", 00:20:55.160 "bdev_name": "raid5f" 00:20:55.160 } 00:20:55.160 ]' 00:20:55.160 04:11:48 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:20:55.160 04:11:48 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:20:55.160 { 00:20:55.160 "nbd_device": "/dev/nbd0", 00:20:55.160 "bdev_name": "raid5f" 00:20:55.160 } 00:20:55.160 ]' 00:20:55.160 04:11:48 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:20:55.160 04:11:48 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:20:55.160 04:11:48 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:20:55.160 04:11:48 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:20:55.160 04:11:48 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:20:55.160 04:11:48 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:20:55.160 04:11:48 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:20:55.160 04:11:48 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:20:55.419 04:11:48 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename 
/dev/nbd0 00:20:55.419 04:11:48 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:20:55.419 04:11:48 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:20:55.419 04:11:48 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:20:55.419 04:11:48 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:20:55.419 04:11:48 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:20:55.419 04:11:48 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:20:55.419 04:11:48 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:20:55.419 04:11:48 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:20:55.419 04:11:48 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:20:55.419 04:11:48 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:20:55.679 04:11:48 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:20:55.679 04:11:48 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:20:55.679 04:11:48 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:20:55.679 04:11:48 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:20:55.679 04:11:48 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:20:55.679 04:11:48 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:20:55.679 04:11:48 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:20:55.679 04:11:48 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:20:55.679 04:11:48 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:20:55.679 04:11:48 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:20:55.679 04:11:48 blockdev_raid5f.bdev_nbd 
-- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:20:55.679 04:11:48 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:20:55.679 04:11:48 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock raid5f /dev/nbd0 00:20:55.679 04:11:48 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:20:55.679 04:11:48 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('raid5f') 00:20:55.679 04:11:48 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:20:55.679 04:11:48 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0') 00:20:55.679 04:11:48 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:20:55.679 04:11:48 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock raid5f /dev/nbd0 00:20:55.679 04:11:48 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:20:55.679 04:11:48 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('raid5f') 00:20:55.679 04:11:48 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:20:55.679 04:11:48 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:20:55.679 04:11:48 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:20:55.679 04:11:48 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:20:55.679 04:11:48 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:20:55.679 04:11:48 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:20:55.679 04:11:48 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk raid5f /dev/nbd0 00:20:55.938 /dev/nbd0 00:20:55.938 04:11:49 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:20:55.938 04:11:49 
blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:20:55.938 04:11:49 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:20:55.938 04:11:49 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:20:55.938 04:11:49 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:20:55.939 04:11:49 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:20:55.939 04:11:49 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:20:55.939 04:11:49 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:20:55.939 04:11:49 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:20:55.939 04:11:49 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:20:55.939 04:11:49 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:20:55.939 1+0 records in 00:20:55.939 1+0 records out 00:20:55.939 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000562195 s, 7.3 MB/s 00:20:55.939 04:11:49 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:55.939 04:11:49 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:20:55.939 04:11:49 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:55.939 04:11:49 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:20:55.939 04:11:49 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:20:55.939 04:11:49 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:20:55.939 04:11:49 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:20:55.939 04:11:49 blockdev_raid5f.bdev_nbd -- 
bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:20:55.939 04:11:49 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:20:55.939 04:11:49 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:20:56.198 04:11:49 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:20:56.198 { 00:20:56.198 "nbd_device": "/dev/nbd0", 00:20:56.198 "bdev_name": "raid5f" 00:20:56.198 } 00:20:56.198 ]' 00:20:56.198 04:11:49 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[ 00:20:56.198 { 00:20:56.198 "nbd_device": "/dev/nbd0", 00:20:56.198 "bdev_name": "raid5f" 00:20:56.198 } 00:20:56.198 ]' 00:20:56.198 04:11:49 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:20:56.198 04:11:49 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name=/dev/nbd0 00:20:56.198 04:11:49 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:20:56.198 04:11:49 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo /dev/nbd0 00:20:56.198 04:11:49 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=1 00:20:56.198 04:11:49 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 1 00:20:56.198 04:11:49 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=1 00:20:56.198 04:11:49 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 1 -ne 1 ']' 00:20:56.198 04:11:49 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify /dev/nbd0 write 00:20:56.198 04:11:49 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0') 00:20:56.198 04:11:49 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:20:56.198 04:11:49 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:20:56.198 04:11:49 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@72 -- # local 
tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:20:56.198 04:11:49 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:20:56.198 04:11:49 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:20:56.198 256+0 records in 00:20:56.198 256+0 records out 00:20:56.198 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0142874 s, 73.4 MB/s 00:20:56.198 04:11:49 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:20:56.198 04:11:49 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:20:56.198 256+0 records in 00:20:56.198 256+0 records out 00:20:56.198 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0336068 s, 31.2 MB/s 00:20:56.198 04:11:49 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify /dev/nbd0 verify 00:20:56.198 04:11:49 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0') 00:20:56.198 04:11:49 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:20:56.198 04:11:49 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:20:56.198 04:11:49 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:20:56.198 04:11:49 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:20:56.198 04:11:49 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:20:56.198 04:11:49 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:20:56.198 04:11:49 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:20:56.198 04:11:49 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm 
/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:20:56.198 04:11:49 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:20:56.198 04:11:49 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:20:56.199 04:11:49 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:20:56.199 04:11:49 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:20:56.199 04:11:49 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:20:56.199 04:11:49 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:20:56.199 04:11:49 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:20:56.457 04:11:49 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:20:56.457 04:11:49 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:20:56.457 04:11:49 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:20:56.457 04:11:49 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:20:56.457 04:11:49 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:20:56.457 04:11:49 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:20:56.457 04:11:49 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:20:56.457 04:11:49 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:20:56.457 04:11:49 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:20:56.457 04:11:49 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:20:56.457 04:11:49 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 
00:20:56.716 04:11:49 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:20:56.716 04:11:49 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:20:56.716 04:11:49 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:20:56.716 04:11:49 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:20:56.716 04:11:49 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:20:56.716 04:11:49 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:20:56.716 04:11:49 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:20:56.716 04:11:49 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:20:56.716 04:11:49 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:20:56.716 04:11:49 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:20:56.716 04:11:49 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:20:56.716 04:11:49 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:20:56.716 04:11:49 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock /dev/nbd0 00:20:56.716 04:11:49 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:20:56.716 04:11:49 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd=/dev/nbd0 00:20:56.716 04:11:49 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@134 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:20:56.976 malloc_lvol_verify 00:20:56.976 04:11:50 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:20:57.235 1d3c42fe-5f99-4772-9286-a095610b8e56 00:20:57.235 04:11:50 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@136 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:20:57.495 427ba311-0944-4e80-9125-e4915afd1528 00:20:57.495 04:11:50 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:20:57.495 /dev/nbd0 00:20:57.495 04:11:50 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@139 -- # wait_for_nbd_set_capacity /dev/nbd0 00:20:57.495 04:11:50 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@146 -- # local nbd=nbd0 00:20:57.495 04:11:50 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@148 -- # [[ -e /sys/block/nbd0/size ]] 00:20:57.495 04:11:50 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@150 -- # (( 8192 == 0 )) 00:20:57.495 04:11:50 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs.ext4 /dev/nbd0 00:20:57.495 mke2fs 1.47.0 (5-Feb-2023) 00:20:57.495 Discarding device blocks: 0/4096 done 00:20:57.495 Creating filesystem with 4096 1k blocks and 1024 inodes 00:20:57.495 00:20:57.495 Allocating group tables: 0/1 done 00:20:57.495 Writing inode tables: 0/1 done 00:20:57.495 Creating journal (1024 blocks): done 00:20:57.495 Writing superblocks and filesystem accounting information: 0/1 done 00:20:57.495 00:20:57.495 04:11:50 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:20:57.754 04:11:50 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:20:57.754 04:11:50 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:20:57.754 04:11:50 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:20:57.754 04:11:50 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:20:57.754 04:11:50 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:20:57.754 04:11:50 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@54 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:20:57.754 04:11:51 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:20:57.754 04:11:51 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:20:57.754 04:11:51 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:20:57.754 04:11:51 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:20:57.754 04:11:51 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:20:57.754 04:11:51 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:20:57.754 04:11:51 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:20:57.754 04:11:51 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:20:57.754 04:11:51 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@325 -- # killprocess 90428 00:20:57.754 04:11:51 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@954 -- # '[' -z 90428 ']' 00:20:57.754 04:11:51 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@958 -- # kill -0 90428 00:20:57.754 04:11:51 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@959 -- # uname 00:20:57.754 04:11:51 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:57.754 04:11:51 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 90428 00:20:58.014 04:11:51 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:20:58.014 04:11:51 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:20:58.014 killing process with pid 90428 00:20:58.014 04:11:51 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@972 -- # echo 'killing process with pid 90428' 00:20:58.014 04:11:51 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@973 -- # kill 90428 00:20:58.014 04:11:51 blockdev_raid5f.bdev_nbd -- 
common/autotest_common.sh@978 -- # wait 90428 00:20:59.390 04:11:52 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@326 -- # trap - SIGINT SIGTERM EXIT 00:20:59.390 00:20:59.390 real 0m5.623s 00:20:59.391 user 0m7.623s 00:20:59.391 sys 0m1.253s 00:20:59.391 04:11:52 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:59.391 04:11:52 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:20:59.391 ************************************ 00:20:59.391 END TEST bdev_nbd 00:20:59.391 ************************************ 00:20:59.391 04:11:52 blockdev_raid5f -- bdev/blockdev.sh@800 -- # [[ y == y ]] 00:20:59.391 04:11:52 blockdev_raid5f -- bdev/blockdev.sh@801 -- # '[' raid5f = nvme ']' 00:20:59.391 04:11:52 blockdev_raid5f -- bdev/blockdev.sh@801 -- # '[' raid5f = gpt ']' 00:20:59.391 04:11:52 blockdev_raid5f -- bdev/blockdev.sh@805 -- # run_test bdev_fio fio_test_suite '' 00:20:59.391 04:11:52 blockdev_raid5f -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:20:59.391 04:11:52 blockdev_raid5f -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:59.391 04:11:52 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:20:59.391 ************************************ 00:20:59.391 START TEST bdev_fio 00:20:59.391 ************************************ 00:20:59.391 04:11:52 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1129 -- # fio_test_suite '' 00:20:59.391 04:11:52 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@330 -- # local env_context 00:20:59.391 04:11:52 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@334 -- # pushd /home/vagrant/spdk_repo/spdk/test/bdev 00:20:59.391 /home/vagrant/spdk_repo/spdk/test/bdev /home/vagrant/spdk_repo/spdk 00:20:59.391 04:11:52 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@335 -- # trap 'rm -f ./*.state; popd; exit 1' SIGINT SIGTERM EXIT 00:20:59.391 04:11:52 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@338 -- # sed s/--env-context=// 00:20:59.391 04:11:52 
blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@338 -- # echo '' 00:20:59.391 04:11:52 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@338 -- # env_context= 00:20:59.391 04:11:52 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@339 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio verify AIO '' 00:20:59.391 04:11:52 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1284 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:20:59.391 04:11:52 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1285 -- # local workload=verify 00:20:59.391 04:11:52 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1286 -- # local bdev_type=AIO 00:20:59.391 04:11:52 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1287 -- # local env_context= 00:20:59.391 04:11:52 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1288 -- # local fio_dir=/usr/src/fio 00:20:59.391 04:11:52 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1290 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:20:59.391 04:11:52 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1295 -- # '[' -z verify ']' 00:20:59.391 04:11:52 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1299 -- # '[' -n '' ']' 00:20:59.391 04:11:52 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1303 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:20:59.391 04:11:52 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1305 -- # cat 00:20:59.391 04:11:52 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1317 -- # '[' verify == verify ']' 00:20:59.391 04:11:52 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1318 -- # cat 00:20:59.391 04:11:52 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1327 -- # '[' AIO == AIO ']' 00:20:59.391 04:11:52 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1328 -- # /usr/src/fio/fio --version 00:20:59.649 04:11:52 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1328 -- # [[ fio-3.35 == *\f\i\o\-\3* 
]] 00:20:59.649 04:11:52 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1329 -- # echo serialize_overlap=1 00:20:59.649 04:11:52 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:20:59.649 04:11:52 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_raid5f]' 00:20:59.649 04:11:52 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=raid5f 00:20:59.649 04:11:52 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@346 -- # local 'fio_params=--ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json' 00:20:59.650 04:11:52 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@348 -- # run_test bdev_fio_rw_verify fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:20:59.650 04:11:52 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1105 -- # '[' 11 -le 1 ']' 00:20:59.650 04:11:52 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:59.650 04:11:52 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:20:59.650 ************************************ 00:20:59.650 START TEST bdev_fio_rw_verify 00:20:59.650 ************************************ 00:20:59.650 04:11:52 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1129 -- # fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:20:59.650 04:11:52 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1360 -- # fio_plugin 
/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:20:59.650 04:11:52 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:20:59.650 04:11:52 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:20:59.650 04:11:52 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1343 -- # local sanitizers 00:20:59.650 04:11:52 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:20:59.650 04:11:52 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # shift 00:20:59.650 04:11:52 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1347 -- # local asan_lib= 00:20:59.650 04:11:52 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:20:59.650 04:11:52 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # grep libasan 00:20:59.650 04:11:52 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:20:59.650 04:11:52 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:20:59.650 04:11:52 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:20:59.650 04:11:52 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:20:59.650 04:11:52 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- 
common/autotest_common.sh@1351 -- # break 00:20:59.650 04:11:52 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:20:59.650 04:11:52 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:20:59.910 job_raid5f: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:20:59.910 fio-3.35 00:20:59.910 Starting 1 thread 00:21:12.109 00:21:12.109 job_raid5f: (groupid=0, jobs=1): err= 0: pid=90626: Fri Dec 6 04:12:03 2024 00:21:12.109 read: IOPS=10.6k, BW=41.6MiB/s (43.6MB/s)(416MiB/10001msec) 00:21:12.109 slat (nsec): min=18952, max=80684, avg=22816.16, stdev=2963.38 00:21:12.109 clat (usec): min=10, max=413, avg=148.62, stdev=55.64 00:21:12.109 lat (usec): min=31, max=448, avg=171.44, stdev=56.34 00:21:12.109 clat percentiles (usec): 00:21:12.109 | 50.000th=[ 149], 99.000th=[ 265], 99.900th=[ 310], 99.990th=[ 363], 00:21:12.109 | 99.999th=[ 396] 00:21:12.109 write: IOPS=11.1k, BW=43.6MiB/s (45.7MB/s)(430MiB/9877msec); 0 zone resets 00:21:12.109 slat (usec): min=8, max=239, avg=19.15, stdev= 4.80 00:21:12.109 clat (usec): min=61, max=1475, avg=344.30, stdev=59.23 00:21:12.109 lat (usec): min=77, max=1714, avg=363.45, stdev=61.60 00:21:12.109 clat percentiles (usec): 00:21:12.109 | 50.000th=[ 343], 99.000th=[ 519], 99.900th=[ 635], 99.990th=[ 1188], 00:21:12.109 | 99.999th=[ 1418] 00:21:12.109 bw ( KiB/s): min=39736, max=49128, per=98.72%, avg=44025.68, stdev=2415.94, samples=19 00:21:12.109 iops : min= 9934, max=12282, avg=11006.42, stdev=603.98, samples=19 00:21:12.109 lat (usec) : 20=0.01%, 50=0.01%, 
100=11.64%, 250=37.53%, 500=50.16% 00:21:12.109 lat (usec) : 750=0.64%, 1000=0.02% 00:21:12.109 lat (msec) : 2=0.01% 00:21:12.109 cpu : usr=99.01%, sys=0.30%, ctx=23, majf=0, minf=8899 00:21:12.109 IO depths : 1=7.7%, 2=20.0%, 4=55.0%, 8=17.3%, 16=0.0%, 32=0.0%, >=64=0.0% 00:21:12.109 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:12.109 complete : 0=0.0%, 4=90.0%, 8=10.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:12.109 issued rwts: total=106508,110123,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:12.109 latency : target=0, window=0, percentile=100.00%, depth=8 00:21:12.109 00:21:12.109 Run status group 0 (all jobs): 00:21:12.109 READ: bw=41.6MiB/s (43.6MB/s), 41.6MiB/s-41.6MiB/s (43.6MB/s-43.6MB/s), io=416MiB (436MB), run=10001-10001msec 00:21:12.109 WRITE: bw=43.6MiB/s (45.7MB/s), 43.6MiB/s-43.6MiB/s (45.7MB/s-45.7MB/s), io=430MiB (451MB), run=9877-9877msec 00:21:12.368 ----------------------------------------------------- 00:21:12.368 Suppressions used: 00:21:12.368 count bytes template 00:21:12.368 1 7 /usr/src/fio/parse.c 00:21:12.368 312 29952 /usr/src/fio/iolog.c 00:21:12.368 1 8 libtcmalloc_minimal.so 00:21:12.368 1 904 libcrypto.so 00:21:12.368 ----------------------------------------------------- 00:21:12.368 00:21:12.368 00:21:12.368 real 0m12.829s 00:21:12.368 user 0m13.003s 00:21:12.368 sys 0m0.656s 00:21:12.368 04:12:05 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:12.368 04:12:05 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@10 -- # set +x 00:21:12.368 ************************************ 00:21:12.368 END TEST bdev_fio_rw_verify 00:21:12.368 ************************************ 00:21:12.368 04:12:05 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@349 -- # rm -f 00:21:12.368 04:12:05 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@350 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:21:12.368 04:12:05 blockdev_raid5f.bdev_fio -- 
bdev/blockdev.sh@353 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio trim '' '' 00:21:12.368 04:12:05 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1284 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:21:12.368 04:12:05 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1285 -- # local workload=trim 00:21:12.368 04:12:05 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1286 -- # local bdev_type= 00:21:12.368 04:12:05 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1287 -- # local env_context= 00:21:12.368 04:12:05 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1288 -- # local fio_dir=/usr/src/fio 00:21:12.368 04:12:05 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1290 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:21:12.368 04:12:05 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1295 -- # '[' -z trim ']' 00:21:12.368 04:12:05 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1299 -- # '[' -n '' ']' 00:21:12.368 04:12:05 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1303 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:21:12.368 04:12:05 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1305 -- # cat 00:21:12.368 04:12:05 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1317 -- # '[' trim == verify ']' 00:21:12.368 04:12:05 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1332 -- # '[' trim == trim ']' 00:21:12.368 04:12:05 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1333 -- # echo rw=trimwrite 00:21:12.368 04:12:05 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@354 -- # printf '%s\n' '{' ' "name": "raid5f",' ' "aliases": [' ' "22595baf-b11c-4048-9efe-70f9261ae7fd"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 131072,' ' "uuid": "22595baf-b11c-4048-9efe-70f9261ae7fd",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' 
"w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "raid": {' ' "uuid": "22595baf-b11c-4048-9efe-70f9261ae7fd",' ' "strip_size_kb": 2,' ' "state": "online",' ' "raid_level": "raid5f",' ' "superblock": false,' ' "num_base_bdevs": 3,' ' "num_base_bdevs_discovered": 3,' ' "num_base_bdevs_operational": 3,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc0",' ' "uuid": "f76a54f0-2b1b-41b6-8de4-2f65cd3043ea",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc1",' ' "uuid": "59ea4b71-fdfc-44e4-9047-5d27fb304c08",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc2",' ' "uuid": "6d00c3be-988f-4c59-bef6-77479ccc4957",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' 00:21:12.368 04:12:05 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@354 -- # jq -r 'select(.supported_io_types.unmap == true) | .name' 00:21:12.627 04:12:05 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@354 -- # [[ -n '' ]] 00:21:12.627 04:12:05 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@360 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:21:12.627 /home/vagrant/spdk_repo/spdk 00:21:12.627 04:12:05 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@361 -- # popd 00:21:12.627 04:12:05 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@362 -- # trap - SIGINT SIGTERM EXIT 00:21:12.627 04:12:05 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@363 -- # return 0 
00:21:12.627 00:21:12.627 real 0m13.101s 00:21:12.627 user 0m13.129s 00:21:12.627 sys 0m0.774s 00:21:12.628 04:12:05 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:12.628 04:12:05 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:21:12.628 ************************************ 00:21:12.628 END TEST bdev_fio 00:21:12.628 ************************************ 00:21:12.628 04:12:05 blockdev_raid5f -- bdev/blockdev.sh@812 -- # trap cleanup SIGINT SIGTERM EXIT 00:21:12.628 04:12:05 blockdev_raid5f -- bdev/blockdev.sh@814 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:21:12.628 04:12:05 blockdev_raid5f -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']' 00:21:12.628 04:12:05 blockdev_raid5f -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:12.628 04:12:05 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:21:12.628 ************************************ 00:21:12.628 START TEST bdev_verify 00:21:12.628 ************************************ 00:21:12.628 04:12:05 blockdev_raid5f.bdev_verify -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:21:12.628 [2024-12-06 04:12:05.906803] Starting SPDK v25.01-pre git sha1 a4d2a837b / DPDK 24.03.0 initialization... 
00:21:12.628 [2024-12-06 04:12:05.906912] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90790 ] 00:21:12.886 [2024-12-06 04:12:06.080643] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:21:12.887 [2024-12-06 04:12:06.203072] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:12.887 [2024-12-06 04:12:06.203111] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:21:13.456 Running I/O for 5 seconds... 00:21:15.404 15582.00 IOPS, 60.87 MiB/s [2024-12-06T04:12:10.136Z] 15027.00 IOPS, 58.70 MiB/s [2024-12-06T04:12:11.083Z] 15025.00 IOPS, 58.69 MiB/s [2024-12-06T04:12:12.049Z] 15110.75 IOPS, 59.03 MiB/s [2024-12-06T04:12:12.049Z] 15166.80 IOPS, 59.25 MiB/s 00:21:18.695 Latency(us) 00:21:18.695 [2024-12-06T04:12:12.049Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:18.695 Job: raid5f (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:21:18.695 Verification LBA range: start 0x0 length 0x2000 00:21:18.695 raid5f : 5.02 7522.21 29.38 0.00 0.00 25583.50 100.16 24611.77 00:21:18.695 Job: raid5f (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:21:18.695 Verification LBA range: start 0x2000 length 0x2000 00:21:18.695 raid5f : 5.01 7629.45 29.80 0.00 0.00 25202.72 236.10 24955.19 00:21:18.695 [2024-12-06T04:12:12.049Z] =================================================================================================================== 00:21:18.695 [2024-12-06T04:12:12.049Z] Total : 15151.66 59.19 0.00 0.00 25391.96 100.16 24955.19 00:21:20.075 00:21:20.075 real 0m7.356s 00:21:20.075 user 0m13.608s 00:21:20.075 sys 0m0.278s 00:21:20.075 04:12:13 blockdev_raid5f.bdev_verify -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:20.075 04:12:13 
blockdev_raid5f.bdev_verify -- common/autotest_common.sh@10 -- # set +x 00:21:20.075 ************************************ 00:21:20.075 END TEST bdev_verify 00:21:20.075 ************************************ 00:21:20.075 04:12:13 blockdev_raid5f -- bdev/blockdev.sh@815 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:21:20.075 04:12:13 blockdev_raid5f -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']' 00:21:20.075 04:12:13 blockdev_raid5f -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:20.075 04:12:13 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:21:20.075 ************************************ 00:21:20.075 START TEST bdev_verify_big_io 00:21:20.075 ************************************ 00:21:20.075 04:12:13 blockdev_raid5f.bdev_verify_big_io -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:21:20.075 [2024-12-06 04:12:13.318151] Starting SPDK v25.01-pre git sha1 a4d2a837b / DPDK 24.03.0 initialization... 00:21:20.075 [2024-12-06 04:12:13.318277] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90888 ] 00:21:20.333 [2024-12-06 04:12:13.488786] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:21:20.334 [2024-12-06 04:12:13.608831] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:20.334 [2024-12-06 04:12:13.608869] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:21:20.901 Running I/O for 5 seconds... 
00:21:23.213 758.00 IOPS, 47.38 MiB/s [2024-12-06T04:12:17.501Z] 792.00 IOPS, 49.50 MiB/s [2024-12-06T04:12:18.438Z] 846.00 IOPS, 52.88 MiB/s [2024-12-06T04:12:19.372Z] 888.50 IOPS, 55.53 MiB/s [2024-12-06T04:12:19.372Z] 914.00 IOPS, 57.12 MiB/s 00:21:26.018 Latency(us) 00:21:26.018 [2024-12-06T04:12:19.372Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:26.018 Job: raid5f (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:21:26.018 Verification LBA range: start 0x0 length 0x200 00:21:26.018 raid5f : 5.19 465.35 29.08 0.00 0.00 6807477.17 146.67 296714.96 00:21:26.018 Job: raid5f (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:21:26.018 Verification LBA range: start 0x200 length 0x200 00:21:26.018 raid5f : 5.12 471.11 29.44 0.00 0.00 6685325.79 143.99 294883.38 00:21:26.018 [2024-12-06T04:12:19.372Z] =================================================================================================================== 00:21:26.018 [2024-12-06T04:12:19.372Z] Total : 936.46 58.53 0.00 0.00 6746401.48 143.99 296714.96 00:21:27.943 00:21:27.943 real 0m7.537s 00:21:27.943 user 0m13.993s 00:21:27.943 sys 0m0.267s 00:21:27.943 04:12:20 blockdev_raid5f.bdev_verify_big_io -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:27.943 04:12:20 blockdev_raid5f.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x 00:21:27.943 ************************************ 00:21:27.943 END TEST bdev_verify_big_io 00:21:27.943 ************************************ 00:21:27.943 04:12:20 blockdev_raid5f -- bdev/blockdev.sh@816 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:21:27.943 04:12:20 blockdev_raid5f -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:21:27.943 04:12:20 blockdev_raid5f -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:27.943 04:12:20 
blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:21:27.943 ************************************ 00:21:27.943 START TEST bdev_write_zeroes 00:21:27.943 ************************************ 00:21:27.943 04:12:20 blockdev_raid5f.bdev_write_zeroes -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:21:27.943 [2024-12-06 04:12:20.926539] Starting SPDK v25.01-pre git sha1 a4d2a837b / DPDK 24.03.0 initialization... 00:21:27.943 [2024-12-06 04:12:20.926643] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90982 ] 00:21:27.943 [2024-12-06 04:12:21.097965] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:27.943 [2024-12-06 04:12:21.212942] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:28.511 Running I/O for 1 seconds... 
00:21:29.449 26415.00 IOPS, 103.18 MiB/s 00:21:29.449 Latency(us) 00:21:29.449 [2024-12-06T04:12:22.803Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:29.449 Job: raid5f (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:21:29.449 raid5f : 1.01 26394.33 103.10 0.00 0.00 4834.40 1545.39 6639.46 00:21:29.449 [2024-12-06T04:12:22.803Z] =================================================================================================================== 00:21:29.449 [2024-12-06T04:12:22.803Z] Total : 26394.33 103.10 0.00 0.00 4834.40 1545.39 6639.46 00:21:30.829 00:21:30.829 real 0m3.307s 00:21:30.829 user 0m2.940s 00:21:30.829 sys 0m0.239s 00:21:30.829 04:12:24 blockdev_raid5f.bdev_write_zeroes -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:30.829 04:12:24 blockdev_raid5f.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x 00:21:30.829 ************************************ 00:21:30.829 END TEST bdev_write_zeroes 00:21:30.829 ************************************ 00:21:31.089 04:12:24 blockdev_raid5f -- bdev/blockdev.sh@819 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:21:31.089 04:12:24 blockdev_raid5f -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:21:31.089 04:12:24 blockdev_raid5f -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:31.089 04:12:24 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:21:31.089 ************************************ 00:21:31.089 START TEST bdev_json_nonenclosed 00:21:31.089 ************************************ 00:21:31.089 04:12:24 blockdev_raid5f.bdev_json_nonenclosed -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:21:31.089 [2024-12-06 
04:12:24.302964] Starting SPDK v25.01-pre git sha1 a4d2a837b / DPDK 24.03.0 initialization... 00:21:31.089 [2024-12-06 04:12:24.303110] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid91041 ] 00:21:31.350 [2024-12-06 04:12:24.474347] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:31.350 [2024-12-06 04:12:24.588925] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:31.350 [2024-12-06 04:12:24.589204] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 00:21:31.350 [2024-12-06 04:12:24.589325] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:21:31.350 [2024-12-06 04:12:24.589386] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:21:31.610 00:21:31.610 real 0m0.624s 00:21:31.610 user 0m0.403s 00:21:31.610 sys 0m0.117s 00:21:31.610 04:12:24 blockdev_raid5f.bdev_json_nonenclosed -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:31.610 04:12:24 blockdev_raid5f.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:21:31.610 ************************************ 00:21:31.610 END TEST bdev_json_nonenclosed 00:21:31.610 ************************************ 00:21:31.610 04:12:24 blockdev_raid5f -- bdev/blockdev.sh@822 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:21:31.610 04:12:24 blockdev_raid5f -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:21:31.610 04:12:24 blockdev_raid5f -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:31.610 04:12:24 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:21:31.610 
************************************ 00:21:31.610 START TEST bdev_json_nonarray 00:21:31.610 ************************************ 00:21:31.610 04:12:24 blockdev_raid5f.bdev_json_nonarray -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:21:31.870 [2024-12-06 04:12:24.991313] Starting SPDK v25.01-pre git sha1 a4d2a837b / DPDK 24.03.0 initialization... 00:21:31.870 [2024-12-06 04:12:24.991412] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid91066 ] 00:21:31.870 [2024-12-06 04:12:25.163319] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:32.128 [2024-12-06 04:12:25.275868] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:32.128 [2024-12-06 04:12:25.276159] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 
00:21:32.128 [2024-12-06 04:12:25.276232] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:21:32.128 [2024-12-06 04:12:25.276260] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:21:32.386 00:21:32.386 real 0m0.620s 00:21:32.386 user 0m0.395s 00:21:32.386 sys 0m0.121s 00:21:32.386 04:12:25 blockdev_raid5f.bdev_json_nonarray -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:32.386 04:12:25 blockdev_raid5f.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:21:32.386 ************************************ 00:21:32.386 END TEST bdev_json_nonarray 00:21:32.386 ************************************ 00:21:32.386 04:12:25 blockdev_raid5f -- bdev/blockdev.sh@824 -- # [[ raid5f == bdev ]] 00:21:32.386 04:12:25 blockdev_raid5f -- bdev/blockdev.sh@832 -- # [[ raid5f == gpt ]] 00:21:32.386 04:12:25 blockdev_raid5f -- bdev/blockdev.sh@836 -- # [[ raid5f == crypto_sw ]] 00:21:32.386 04:12:25 blockdev_raid5f -- bdev/blockdev.sh@848 -- # trap - SIGINT SIGTERM EXIT 00:21:32.386 04:12:25 blockdev_raid5f -- bdev/blockdev.sh@849 -- # cleanup 00:21:32.386 04:12:25 blockdev_raid5f -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:21:32.386 04:12:25 blockdev_raid5f -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:21:32.386 04:12:25 blockdev_raid5f -- bdev/blockdev.sh@26 -- # [[ raid5f == rbd ]] 00:21:32.386 04:12:25 blockdev_raid5f -- bdev/blockdev.sh@30 -- # [[ raid5f == daos ]] 00:21:32.386 04:12:25 blockdev_raid5f -- bdev/blockdev.sh@34 -- # [[ raid5f = \g\p\t ]] 00:21:32.386 04:12:25 blockdev_raid5f -- bdev/blockdev.sh@40 -- # [[ raid5f == xnvme ]] 00:21:32.386 00:21:32.386 real 0m48.286s 00:21:32.386 user 1m5.440s 00:21:32.386 sys 0m4.689s 00:21:32.386 04:12:25 blockdev_raid5f -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:32.386 04:12:25 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:21:32.386 
************************************ 00:21:32.386 END TEST blockdev_raid5f 00:21:32.386 ************************************ 00:21:32.386 04:12:25 -- spdk/autotest.sh@194 -- # uname -s 00:21:32.386 04:12:25 -- spdk/autotest.sh@194 -- # [[ Linux == Linux ]] 00:21:32.386 04:12:25 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:21:32.386 04:12:25 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:21:32.386 04:12:25 -- spdk/autotest.sh@207 -- # '[' 0 -eq 1 ']' 00:21:32.387 04:12:25 -- spdk/autotest.sh@256 -- # '[' 0 -eq 1 ']' 00:21:32.387 04:12:25 -- spdk/autotest.sh@260 -- # timing_exit lib 00:21:32.387 04:12:25 -- common/autotest_common.sh@732 -- # xtrace_disable 00:21:32.387 04:12:25 -- common/autotest_common.sh@10 -- # set +x 00:21:32.387 04:12:25 -- spdk/autotest.sh@262 -- # '[' 0 -eq 1 ']' 00:21:32.387 04:12:25 -- spdk/autotest.sh@267 -- # '[' 0 -eq 1 ']' 00:21:32.387 04:12:25 -- spdk/autotest.sh@276 -- # '[' 0 -eq 1 ']' 00:21:32.387 04:12:25 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']' 00:21:32.387 04:12:25 -- spdk/autotest.sh@315 -- # '[' 0 -eq 1 ']' 00:21:32.387 04:12:25 -- spdk/autotest.sh@319 -- # '[' 0 -eq 1 ']' 00:21:32.387 04:12:25 -- spdk/autotest.sh@324 -- # '[' 0 -eq 1 ']' 00:21:32.387 04:12:25 -- spdk/autotest.sh@333 -- # '[' 0 -eq 1 ']' 00:21:32.387 04:12:25 -- spdk/autotest.sh@338 -- # '[' 0 -eq 1 ']' 00:21:32.387 04:12:25 -- spdk/autotest.sh@342 -- # '[' 0 -eq 1 ']' 00:21:32.387 04:12:25 -- spdk/autotest.sh@346 -- # '[' 0 -eq 1 ']' 00:21:32.387 04:12:25 -- spdk/autotest.sh@350 -- # '[' 0 -eq 1 ']' 00:21:32.387 04:12:25 -- spdk/autotest.sh@355 -- # '[' 0 -eq 1 ']' 00:21:32.387 04:12:25 -- spdk/autotest.sh@359 -- # '[' 0 -eq 1 ']' 00:21:32.387 04:12:25 -- spdk/autotest.sh@366 -- # [[ 0 -eq 1 ]] 00:21:32.387 04:12:25 -- spdk/autotest.sh@370 -- # [[ 0 -eq 1 ]] 00:21:32.387 04:12:25 -- spdk/autotest.sh@374 -- # [[ 0 -eq 1 ]] 00:21:32.387 04:12:25 -- spdk/autotest.sh@378 -- # [[ '' -eq 1 ]] 00:21:32.387 04:12:25 -- spdk/autotest.sh@385 -- # trap - SIGINT 
SIGTERM EXIT 00:21:32.387 04:12:25 -- spdk/autotest.sh@387 -- # timing_enter post_cleanup 00:21:32.387 04:12:25 -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:32.387 04:12:25 -- common/autotest_common.sh@10 -- # set +x 00:21:32.387 04:12:25 -- spdk/autotest.sh@388 -- # autotest_cleanup 00:21:32.387 04:12:25 -- common/autotest_common.sh@1396 -- # local autotest_es=0 00:21:32.387 04:12:25 -- common/autotest_common.sh@1397 -- # xtrace_disable 00:21:32.387 04:12:25 -- common/autotest_common.sh@10 -- # set +x 00:21:34.991 INFO: APP EXITING 00:21:34.991 INFO: killing all VMs 00:21:34.991 INFO: killing vhost app 00:21:34.991 INFO: EXIT DONE 00:21:34.991 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:21:34.991 Waiting for block devices as requested 00:21:34.991 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:21:35.251 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:21:36.188 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:21:36.188 Cleaning 00:21:36.188 Removing: /var/run/dpdk/spdk0/config 00:21:36.188 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:21:36.188 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:21:36.188 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:21:36.188 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:21:36.188 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:21:36.188 Removing: /var/run/dpdk/spdk0/hugepage_info 00:21:36.188 Removing: /dev/shm/spdk_tgt_trace.pid57011 00:21:36.188 Removing: /var/run/dpdk/spdk0 00:21:36.188 Removing: /var/run/dpdk/spdk_pid56771 00:21:36.189 Removing: /var/run/dpdk/spdk_pid57011 00:21:36.189 Removing: /var/run/dpdk/spdk_pid57246 00:21:36.189 Removing: /var/run/dpdk/spdk_pid57350 00:21:36.189 Removing: /var/run/dpdk/spdk_pid57406 00:21:36.189 Removing: /var/run/dpdk/spdk_pid57534 00:21:36.189 Removing: /var/run/dpdk/spdk_pid57552 
00:21:36.189 Removing: /var/run/dpdk/spdk_pid57762 00:21:36.189 Removing: /var/run/dpdk/spdk_pid57879 00:21:36.189 Removing: /var/run/dpdk/spdk_pid57986 00:21:36.189 Removing: /var/run/dpdk/spdk_pid58108 00:21:36.189 Removing: /var/run/dpdk/spdk_pid58216 00:21:36.189 Removing: /var/run/dpdk/spdk_pid58256 00:21:36.189 Removing: /var/run/dpdk/spdk_pid58292 00:21:36.189 Removing: /var/run/dpdk/spdk_pid58368 00:21:36.189 Removing: /var/run/dpdk/spdk_pid58492 00:21:36.189 Removing: /var/run/dpdk/spdk_pid58939 00:21:36.189 Removing: /var/run/dpdk/spdk_pid59016 00:21:36.189 Removing: /var/run/dpdk/spdk_pid59090 00:21:36.189 Removing: /var/run/dpdk/spdk_pid59112 00:21:36.189 Removing: /var/run/dpdk/spdk_pid59262 00:21:36.189 Removing: /var/run/dpdk/spdk_pid59279 00:21:36.189 Removing: /var/run/dpdk/spdk_pid59427 00:21:36.189 Removing: /var/run/dpdk/spdk_pid59443 00:21:36.189 Removing: /var/run/dpdk/spdk_pid59513 00:21:36.189 Removing: /var/run/dpdk/spdk_pid59534 00:21:36.189 Removing: /var/run/dpdk/spdk_pid59603 00:21:36.189 Removing: /var/run/dpdk/spdk_pid59627 00:21:36.189 Removing: /var/run/dpdk/spdk_pid59822 00:21:36.189 Removing: /var/run/dpdk/spdk_pid59864 00:21:36.189 Removing: /var/run/dpdk/spdk_pid59953 00:21:36.189 Removing: /var/run/dpdk/spdk_pid61295 00:21:36.189 Removing: /var/run/dpdk/spdk_pid61501 00:21:36.189 Removing: /var/run/dpdk/spdk_pid61647 00:21:36.189 Removing: /var/run/dpdk/spdk_pid62285 00:21:36.189 Removing: /var/run/dpdk/spdk_pid62496 00:21:36.189 Removing: /var/run/dpdk/spdk_pid62642 00:21:36.189 Removing: /var/run/dpdk/spdk_pid63285 00:21:36.189 Removing: /var/run/dpdk/spdk_pid63621 00:21:36.189 Removing: /var/run/dpdk/spdk_pid63761 00:21:36.189 Removing: /var/run/dpdk/spdk_pid65157 00:21:36.189 Removing: /var/run/dpdk/spdk_pid65410 00:21:36.189 Removing: /var/run/dpdk/spdk_pid65556 00:21:36.189 Removing: /var/run/dpdk/spdk_pid66944 00:21:36.189 Removing: /var/run/dpdk/spdk_pid67197 00:21:36.189 Removing: /var/run/dpdk/spdk_pid67344 
00:21:36.189 Removing: /var/run/dpdk/spdk_pid68728 00:21:36.189 Removing: /var/run/dpdk/spdk_pid69168 00:21:36.189 Removing: /var/run/dpdk/spdk_pid69314 00:21:36.189 Removing: /var/run/dpdk/spdk_pid70809 00:21:36.189 Removing: /var/run/dpdk/spdk_pid71075 00:21:36.189 Removing: /var/run/dpdk/spdk_pid71228 00:21:36.448 Removing: /var/run/dpdk/spdk_pid72733 00:21:36.448 Removing: /var/run/dpdk/spdk_pid72992 00:21:36.448 Removing: /var/run/dpdk/spdk_pid73143 00:21:36.448 Removing: /var/run/dpdk/spdk_pid74652 00:21:36.448 Removing: /var/run/dpdk/spdk_pid75134 00:21:36.448 Removing: /var/run/dpdk/spdk_pid75285 00:21:36.448 Removing: /var/run/dpdk/spdk_pid75423 00:21:36.448 Removing: /var/run/dpdk/spdk_pid75865 00:21:36.448 Removing: /var/run/dpdk/spdk_pid76616 00:21:36.448 Removing: /var/run/dpdk/spdk_pid77038 00:21:36.448 Removing: /var/run/dpdk/spdk_pid77758 00:21:36.448 Removing: /var/run/dpdk/spdk_pid78217 00:21:36.448 Removing: /var/run/dpdk/spdk_pid78985 00:21:36.448 Removing: /var/run/dpdk/spdk_pid79424 00:21:36.448 Removing: /var/run/dpdk/spdk_pid81402 00:21:36.448 Removing: /var/run/dpdk/spdk_pid81851 00:21:36.448 Removing: /var/run/dpdk/spdk_pid82294 00:21:36.448 Removing: /var/run/dpdk/spdk_pid84389 00:21:36.448 Removing: /var/run/dpdk/spdk_pid84875 00:21:36.448 Removing: /var/run/dpdk/spdk_pid85402 00:21:36.448 Removing: /var/run/dpdk/spdk_pid86460 00:21:36.448 Removing: /var/run/dpdk/spdk_pid86777 00:21:36.448 Removing: /var/run/dpdk/spdk_pid87721 00:21:36.448 Removing: /var/run/dpdk/spdk_pid88041 00:21:36.448 Removing: /var/run/dpdk/spdk_pid88974 00:21:36.448 Removing: /var/run/dpdk/spdk_pid89298 00:21:36.448 Removing: /var/run/dpdk/spdk_pid89974 00:21:36.448 Removing: /var/run/dpdk/spdk_pid90254 00:21:36.448 Removing: /var/run/dpdk/spdk_pid90322 00:21:36.448 Removing: /var/run/dpdk/spdk_pid90369 00:21:36.448 Removing: /var/run/dpdk/spdk_pid90611 00:21:36.448 Removing: /var/run/dpdk/spdk_pid90790 00:21:36.448 Removing: /var/run/dpdk/spdk_pid90888 
00:21:36.448 Removing: /var/run/dpdk/spdk_pid90982 00:21:36.448 Removing: /var/run/dpdk/spdk_pid91041 00:21:36.448 Removing: /var/run/dpdk/spdk_pid91066 00:21:36.448 Clean 00:21:36.448 04:12:29 -- common/autotest_common.sh@1453 -- # return 0 00:21:36.448 04:12:29 -- spdk/autotest.sh@389 -- # timing_exit post_cleanup 00:21:36.448 04:12:29 -- common/autotest_common.sh@732 -- # xtrace_disable 00:21:36.448 04:12:29 -- common/autotest_common.sh@10 -- # set +x 00:21:36.707 04:12:29 -- spdk/autotest.sh@391 -- # timing_exit autotest 00:21:36.707 04:12:29 -- common/autotest_common.sh@732 -- # xtrace_disable 00:21:36.707 04:12:29 -- common/autotest_common.sh@10 -- # set +x 00:21:36.707 04:12:29 -- spdk/autotest.sh@392 -- # chmod a+r /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:21:36.707 04:12:29 -- spdk/autotest.sh@394 -- # [[ -f /home/vagrant/spdk_repo/spdk/../output/udev.log ]] 00:21:36.707 04:12:29 -- spdk/autotest.sh@394 -- # rm -f /home/vagrant/spdk_repo/spdk/../output/udev.log 00:21:36.707 04:12:29 -- spdk/autotest.sh@396 -- # [[ y == y ]] 00:21:36.707 04:12:29 -- spdk/autotest.sh@398 -- # hostname 00:21:36.707 04:12:29 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -d /home/vagrant/spdk_repo/spdk -t fedora39-cloud-1721788873-2326 -o /home/vagrant/spdk_repo/spdk/../output/cov_test.info 00:21:36.707 geninfo: WARNING: invalid characters removed from testname! 
00:21:58.637 04:12:51 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -a /home/vagrant/spdk_repo/spdk/../output/cov_base.info -a /home/vagrant/spdk_repo/spdk/../output/cov_test.info -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:22:01.928 04:12:54 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/dpdk/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:22:03.832 04:12:56 -- spdk/autotest.sh@404 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info --ignore-errors unused,unused '/usr/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:22:05.737 04:12:58 -- spdk/autotest.sh@405 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/examples/vmd/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:22:08.351 04:13:01 -- spdk/autotest.sh@406 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o 
/home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:22:10.251 04:13:03 -- spdk/autotest.sh@407 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:22:12.156 04:13:05 -- spdk/autotest.sh@408 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:22:12.156 04:13:05 -- spdk/autorun.sh@1 -- $ timing_finish 00:22:12.156 04:13:05 -- common/autotest_common.sh@738 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/timing.txt ]] 00:22:12.156 04:13:05 -- common/autotest_common.sh@740 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:22:12.156 04:13:05 -- common/autotest_common.sh@741 -- $ [[ -x /usr/local/FlameGraph/flamegraph.pl ]] 00:22:12.156 04:13:05 -- common/autotest_common.sh@744 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:22:12.156 + [[ -n 5430 ]] 00:22:12.156 + sudo kill 5430 00:22:12.164 [Pipeline] } 00:22:12.179 [Pipeline] // timeout 00:22:12.184 [Pipeline] } 00:22:12.197 [Pipeline] // stage 00:22:12.202 [Pipeline] } 00:22:12.216 [Pipeline] // catchError 00:22:12.225 [Pipeline] stage 00:22:12.227 [Pipeline] { (Stop VM) 00:22:12.239 [Pipeline] sh 00:22:12.519 + vagrant halt 00:22:15.807 ==> default: Halting domain... 00:22:22.386 [Pipeline] sh 00:22:22.665 + vagrant destroy -f 00:22:25.972 ==> default: Removing domain... 
00:22:25.985 [Pipeline] sh 00:22:26.269 + mv output /var/jenkins/workspace/raid-vg-autotest_2/output 00:22:26.279 [Pipeline] } 00:22:26.294 [Pipeline] // stage 00:22:26.300 [Pipeline] } 00:22:26.314 [Pipeline] // dir 00:22:26.319 [Pipeline] } 00:22:26.335 [Pipeline] // wrap 00:22:26.341 [Pipeline] } 00:22:26.356 [Pipeline] // catchError 00:22:26.368 [Pipeline] stage 00:22:26.371 [Pipeline] { (Epilogue) 00:22:26.387 [Pipeline] sh 00:22:26.678 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:22:33.252 [Pipeline] catchError 00:22:33.254 [Pipeline] { 00:22:33.267 [Pipeline] sh 00:22:33.549 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:22:33.549 Artifacts sizes are good 00:22:33.557 [Pipeline] } 00:22:33.573 [Pipeline] // catchError 00:22:33.583 [Pipeline] archiveArtifacts 00:22:33.589 Archiving artifacts 00:22:33.704 [Pipeline] cleanWs 00:22:33.721 [WS-CLEANUP] Deleting project workspace... 00:22:33.721 [WS-CLEANUP] Deferred wipeout is used... 00:22:33.739 [WS-CLEANUP] done 00:22:33.741 [Pipeline] } 00:22:33.757 [Pipeline] // stage 00:22:33.761 [Pipeline] } 00:22:33.775 [Pipeline] // node 00:22:33.780 [Pipeline] End of Pipeline 00:22:33.814 Finished: SUCCESS